[Binary artifact — not recoverable as text.]

This file is a POSIX tar archive of Zuul CI job output, not a text document. The tar headers identify the following entries (owner `core`):

    var/home/core/zuul-output/                  (directory)
    var/home/core/zuul-output/logs/             (directory)
    var/home/core/zuul-output/logs/kubelet.log.gz   (gzip-compressed kubelet log)

The remainder of the file is the gzip-compressed payload of `kubelet.log.gz`; the compressed bytes do not decode as text and carry no recoverable prose. To read the log, extract the archive and decompress the file, e.g. `tar -xf <archive> && gunzip var/home/core/zuul-output/logs/kubelet.log.gz`.
Z?z : Θ luB;NdLI ?0όO1 T!q̃脜Atrv >v<3~Q};3ݪhI^yBC`'șxQΉ^DTF.Zh+NψS#,3)IGJU吆}{Zt_' E$2+4Ddpy^J0܆4VifC?a+ o0*04Iָxr1ŸGwنHJ7_yuosK|nD|8*]&BU-qU[W5CQEa JC>('4i佗D6hTtUV10E ,?{WƑ;۟ҟ_}_~~Ϙϯo_'XQp<8ߘNva+3&ʸZZJ?\o:C78XH[} R'mI{uqΰpE ނտ"m=7~4UlGk[7lsu<}:y QS 'zd>>P@)E%w" hqC`? #ZN#`REɃKB,HhB=c%3l$0.e%>?pyU#*Q,_ {V5)qHebH(H1I@9|xb|@}Tz6 4Jw3D/ͼt_zW/p `c̱r4򚴗-# dRKJ1FۏzP{fu Aa^3Zi 0,rɢ;bRDת;yCh4BM&PfTet`E 5 T!ӲFr ~g=98/:wf_vxMQ٘ lYɲ[q( z`W)[*$TɂĔ#"o}+7qO\@\A\YƓ6# `SCE|&YqG'DTH&"׊iV Abqr5ߺ|k\7$_MV|`] "Zם 60IYd$%vUYx[=ml9呦ܳVF;1moc%Gf=:86iqki%@7o[imJ/ļTʒ(Ue6Js@crEi !{=^it(z;'%FO- K9DŽRn&x92vEw9lV &JH!m[ b ]O$Z ? {Ϯqx[k> %v8O{3* VL 68je+-y :COE…*_.$6`*{}f-~:^ɲWý۴hPr74}^gR*'?k|^_tz?nqU݆ߌa-ngwE5 I!0v =|՝T8N|?=Tm6GI` )y (Y2* \NY8Ж!z=8klpxnbn1VٽG PKcjr]Ej SwU")ᮟ[}hdکlĔn5\NӾgfJer͑^妘7kwhwydrY,Hf,:K:ݳ`eرP,lemqIAニQrf\Y=$M\rvvII% op?LR3n f\e$4t{Ev2cy07\21{1,,đjn$딶W@ͨ=IX2 o3cli{욛o);B::k]5}$:]R&;P E sa{ے38܀jcdXwE6b4RCQrM* ;a_]QkZȻV0#*)dFR˴&̈́1Fi{9.ʘonwpd27یDz{HkcBɕzjWr~1j"`,+A@gI1h+1oS'@6\kvD(+w#E\DyAVֈRb٪lk0g䬐c;W:O IUpXRƌƀg>LLjS'PvB=xϗ)5k~y8eFx+ )߾#s5J6,M% J6z sZD)Z2li%%Y3ױ]yخ_lg/lH%4\`|QJg8D)U `h‚"􉎦/rYVjC~_s_i7if /#=93H O׻V'vj;,ZlX#r)_@cfK7B2gKNɶ^{u&'.vzfz:]߲raaxz]5Pis8+òd2NKc рEo)I)9&m%is; aFHi\ZCa1o I<51FK D!xD:*E[ G@Zg ݂j΄Nmv.["VHԕW&UeK s1aEbժ]{ʗ̷h0yjb~ ]^bx~l&_27ߛnSo e9~œ/+ 1J /O՗fO#3+-͕Drb7ZsZ5O$ &'՟S4=\yk#,E.:mp m}R lH~xf3jŖaa\T D CĉqGeFɋr U"2\HzS>6CZnz'b n vWFݾYTrg͝t-z P~7*"SeXRA/>ɿX+Zw%δ8+ O\/k(U*I7%TlQR2.w5nGa;>4GՇ[^}~N|ʔdR[,@(cBa0P{6XE.i*( !v:7 #$86^ z@[EVıH1WOp˾Y!-{=LWn,-/O!!m?‹7fdK/0~x! 
^WoޜuT\u5*VI \ti@tɧ h`BOhuJC}w]seGe?ꁖ$wϸ.w]s>w]Q\kQS&U}nU[VUnUh4U501bB <* 8CvfzH_L[bއy= ø!)qCfQ*bTH;XX<"2 7PNroDʵl6"Ǖᒎ:t F9 ZЦ@N?Σj|CđG2HyD ģ(|rK%bdF/cLõq+Gy=kbhr +'!'f4 {2e,Ý|bΒ%-K!%%B"vY8 p[='[NH.e8HEOBV)YApZkx-rVŁ*dUYky5m FHi\ZCa1Xʔxjbb#BtTV# t\ dڶo=:rZg({wݞV֩bzQܙm:z s1xہ֡uԺ[o9܅>o݌vjVBZ(wz^4=/s/-ּB[i;:& FâqKނS3ggM-)Vpџ|2N&{eels&;#ugԝ3RwkH;KJF(;#ugԝe-#ugHjÄz۪͟+عOUk,pg 2WT_Ob * B /lPqx×*f] BbHu (PV)$ 8qIVbZ9Rv[cR8;ĬW逰 95 $?o&\yULR8zʸa}'0eZuYL b 5`4P&,%b,EW ˏi}K xoڻ ~į`R0%5|ü0>bb_)xvݛZΝz^% C_9*gK"Nx82%ҹT '`q#gQΉ^DTFrP{Cs>X!]w^J'G(IGʍ吆s{Zc}K0p_DB #r ICK=#`'HwD@4:&ɭ03,*͸J<Ai!-{<։FTu(a-4(lA&8::]ArN RrnL U?ٝ\t.*c)N@a G"i悦q\gAjWSv 1wQrT*`ĠGMS- 6xqK G1r%79']|Һ}ָHZ~&?믹<KR0F@6T@. ~)M\v~} =ЙS3ҋ+g|Ꮍ}}_vYF"rq.Swa2 lE9 pp>TÀCJsTt@t[]MCWp0#Ч*rн _:wDk@ew2[=8+.Neߣ 9L#5_8|s&UL_-Ԩo*?tV4mݩZU?z~r3ba2so뛚T/h$ GO*p~so j)y74h4p0ևڟ _f.cɶũ}"FmZk\YZcH}4v2tF.QsGrCxް~q`ǿvt?K߿/|+L}{7p(0MSSxLX̜oi)7CcSg(U2Gi)p u 0szegD_@G?#547*)3ns }}\|\7ȇ)H~P#('.e<ez{G3H9c#ɦ*c;P$!JV'.! e #JGF;.e#ކRT=HP(a`+NK8X$2lH $b$$ uZ arw2hjm3Ԗm=ηqenMU*TNNSAn{_:שaW1NJ%hRR0|%gS4a'/x̖}Trsڻ^sIFǁ> GV L&b_6 E7jC)Mv&.`6.:տ"Wwv84lP_79^5LNOC!jI_Ԏ)KGiV^GKO^]|Zdy7-(S< Qۡ`]_ jAS2C2|ȉ‡h\c&Ȣ(RI FP|*6J5fK,3(uF,B0#BXY HjʈhA` FрG!eLDe\k<w/t>?X'wn{N@O_[4gP2c,#%sdf̂zfYg m+ V# ߱LTz05WN:JwMIGjjs 4L$[:mLO=݆INi[snN8#)mIr~sm{hy9Ud̈́) o<2qRYn }9Jg Zдၧ䆽 O TAN{Le&Hc|28XudiWUZ:$N\kJ{><̴2G'-9NZ~ryY_m, α(uF,Q0Fj(:JnIv/ly엘"XT5xfe 6*RjU'}hRHW^At3+txbä5iZڂ- m|,@c2a72ηZlcs؝c]%j.Yz%MKH\ ,.!aZ([u&8k}>Q1~e th=`>}&:0w]5B"8e )`4P&,%bW}g~1OGbo5Kw[Uo (7Q nz@vQn< !n?7q~֠kj2?,9A%%_:==MUBᖉGdTqu0GĊi0Ѷ-`" x/e3ЋjǽxeaȴE%<`#8:i|@DThFHđ`ONs^[ "y3_4;Sۯxí6ȑ@,##E  >wV2E $,U}A,Թ-u7N/O >h+޿ mvO)Y*j1k y8G oƧfp43EiW"'W (׹*uh6@V<ճ1aGkD'~0DA9@ss dV__}.rZa4+hU'&9<}4t%MK%HS,2(4{+\R"SH"wK_ʵ̑kf,HFǤ" 2JaQiƝVc  B2~H&v'1cP_#`c~ֽpٸQZ8i Q"LptwE$ %-1zj,[ $e2hG$j121PBƍQA0N4sAS\x +HϹO;a(vJAפ! DmDL5\I{(>K.*kmj$^- ?\v~ Οߔ0)R0F@6T@. 
~~7 s3&oCgJ J/4p޵,"bv/l&g23`reY$bV,_(eQvb*_{(bw *Sb~v1F"rtΤSwd2\Q9xqFh*aUi`M@ZX.*'l!_ü i& Y"cy1``V^/zpi*%R^oM:99]\/_*0p>V\hZ?>Ri6 ^:q bè*σ[Դ͓~\uw>T7~7-6켞[ovI4Oƿ^ͪw~[PH\#] CڇcgfY>0T f0bZ'|˛1٪SGQ6j۳2L040dRWeL c|z)dEx⏺Ḿ=ps`ǿ::}O??)&;GXq30bH|k~ߜVf-q5~zDZiXt=?pX }Z.Iz_.*(r.ڦE-ބ߈"] ͇m0F]>u|q~w\@u1 Qsڈ)zTP@o O;q9Kn]iF;Ё{["$k;P$!JV'.! e # # ⲑކv)*GCX D"G98- [֤pZ:"qaEڸ@B!FIN':J\O<1qF}+fkcx;U9~i?KdTw| &tjt4xR1NJr;+^&Q -9+RfZfx>&2RvtՏ{I9^ER6 GIсehYJ3`ÈS]wӜQYx\&\LŅ?wx28 È/5L 6g=;޻SO]~ -T4:MC֑zMb0e\ T[%LzEwVwx3ٞYl8hy` )`X9VF(4\hǎX%u*Nc3Z@ 4wF-yQf$;A Utt,^Y/@,mk/|~ϲ;#Rh®Ȗ,K0,I>'I>'IsR팴v@jϞ7'W)=0k|[zIm$9qNr$9qNr$/%sƏM:=JW1X1pǍ6sLb^@eA+FDيa9larBEi !{=it(z;'$FOpr 8M(rneZG؝1rbMH!]cYAC4 LF-1Wխ"y}gw qW5Լ>2H.A̫pTMvN{;xNt4IXI7:-Bst `17 +_pGPCw)w>:g''Ӟd;LKts~5nT&ggI9PٽK PӡKs:;vP rS$#ogg~jz_qCŸ[hFg4%y'. X .^6 ?\n^,,7Ynz)7;}ܤ> ]letZlW 8̝oVW {Jnn ɘdyNXàмBD"%z IhYz$c=#}ګN ο6+Awn/j޽PpeK9\,vQI6J!^ؠLaLT9~o =!$ႀJȀsj%F^Pc^r^UC39˻^Hz-}i?45|l)vߤ*TvqhO#F;CGLQJ6KE]Pqg$! Xhb҅Vpb#VQҋ*]yje7tYLLrK B($0f4=d:F2z4Q+Z>ǽZP}MzA ɎaoQ:?Bf_ܺߗʹ-# V e&9z\u02ZDe&*)2_aL)z~,-eaFô`1? (R`K  , eggֈ]sv-%4pCRBԄEK-:ҦFBET:)rW YϞo}E}jhx̖ڞ,5)Tn? ׌,54,/);@?V鈜du,ИBL{g3Q~8ֹLJc/<.1` j! rM.^̻~icoj1Z^_}vdd&#QHYu2LwZ D佖豉hj4BZ":+r:52w:Y&`5VpEH" .Fa\^TLj#BX<Ǭ38FZX,@(vGE`.LʩR`?ZH#̺"gie%wUS_^%yHg|!!O4FO/'6U׷U`;v6-`SؤvKŪT+w^M(֢)%bZ!8>jK ^Ȍ ^`A Ɯu¿ލ3yTg]b1'ѼN`=)`4P&,%l?ч-SlntMZvp@|'t4D{R0Q9 ZDl ha[0̀#(xAc{=m$D9[zۖ@';jT*E 6<+-CFF9ӜP%9B`(%rIe \J0 3r@*DZ{-QIz9Jl_?msaI2x&+$Y?!O8=At ɬna6>;h脀$Әo?djqKg58rۯ~ QIPԴxYM?+Ml82^4co'~ ĒWcU{*::CkN?4gI9rQ_˝hq.`9V̦~﹨G@iakL|_9[R$uAexy 5F)o P.姷o;kH)EВ)E 0lü3/tpmű`e-zW 6V0#*(`FB 6;˜܊M'ĦO(ѳ.t,>Ҝ}7tł(Ȣ5"c$r1?&QKY-D%S٢y 1y@  r%ӇZEpURguł, ]eĔ??˰x n`87Xւ?{F0/lۼT`r2;yb_6"i %$~eيպe:HdYů. H9J̼zJ`n*GJdCPR'X 2Z@_G˛Ag+31c^7|~ 7Mҷ7$V/:Iҏ ۹ݎKrVL1"爐|bt6|.#E BF̉XYYl!t` T6 lZPa!,Z--P~ f67)$e.23[%X|MPtݮ. :ꂎ. :ꂎ. :nأ7ns.􎺠. :ꂎ. 
:4K4j\i Dh0+7DMAu5:& k'}C:ZRXA6!҇gQgyQ_V|g͗~0Z*dFiX41R,!L12HWj_Th[ (>VEC>$,KJR%pFJ sROkY`jR+^}`( AwjOO%wo.N)~'ͭt~Bk2BlbDe/6h8E`H8KPsд++DZ+CR)k/ 6֘M eT$/,+ug0CIHJq 䄕TI1 d,#RZ=f /-9g)g$&74C@P*_=x1c CY./l9*=8W9 -!r !z|V/hꦽNI,JUbX˜#ВIH1"Ӽ^C:YxABd5:; g\;YSЕ~Uh:pu}ǴC; ۦEjqctDzVCX{Q18Iʸh2fB)*hHF[Zٮ3B 'Hg!,:f =7NTI)]lىZwH#iÇh{"z$,3pEy4i&u8}(LYscgǘΩm!(P&(Il+QWM$&m>1܆lw*{B@[ %cD eR$BAhl9 ̰F? t3;`ξ 37i8y.Ai{Q]Ry|]]3?񚐪Iw<3߽]ѧ_b]θoKAև\1j(zMr[?OdE7VҌoC7Sz)0*@a[FxkZ}GXrZgۤ ғLxf^A;m}c_Z /Z^/Ap\wPn~֡ wsFpr4yf8,v<4&AF2Su*!JrIiNJ2Z <&SRvIdh7('fn.8:;]Z1\MΪD-\E<$O;\j<0F }w o}FG_IZԅ:egNB< yg+c9 dz%wVDUL|yNM>.Ly (3|(luF^bN:DĘ"JYu/ӿ#2>8"c!]?(Oo59]Rafz_gPNk4nQH9ЧCһ 3%2b~\|Zfz궘F̗wֺ\yjg6\:q^0>w癮/.F8sc~ىWo#ZߎEA|>ߎ풓7 (;ABj ʖobU3ju3NlfyeMy8':u _=mџ>pn)t\gԠNj!/d6<}6S$FWƩ<<TJM ωoOy/_~?߾~~^*y׼G`r]x8nvfFφVw߮>9"7s[}:/wku/- p1'OzM{ oj[4Ѵm>4X]#ho5 Toڥ4S!NɍYf <'s}KuaE˖GIzFRm"/DQNW)@A)$W:A4( -3vmXhf&>_p{ Qh+8-d%akҐ iQ PY*]P$O.fc[s{|myoQoJU]~,lLj8 D]X*J< DFt)!ֈWS? yEξ/Af:Q'ˆ_MQY5EjEh90{;2bT {NnxV_N#'A跼)FO#_ZF,zr),=Y9粿t-)=8l.zEfRa`uP:3e:fm10Ʈָ՗)vIRJ(PE'JFl) P}hrBEPNw ,t1=^2E*ok%aH-GZΚѱq ԰@<+{\x?%*)5jz>z?N/v HeAb6l 9Fd2E ͗POTtFZSU &/!^JO!k.{/3z*dt`*$BZwGqw hZVܜ;Z~#R"HEZRV$TP<#q$ۋTI9gtu"{)^?lኲ$˗<`JMAw̛GmgɾhݾIÃI}2Ԃ#"emֻ@;+:+C4id بjeR er,OɛDd `TRUir1b/ ۝I Qz*F5V6Fl;cD d"$٩5:\]"xZϡCA=ALtwA,m\WwT>O_[.,DDx`P#BQln l -}c~ 4Xyhm<̙l/]2j %[/~ݒ,jIִ,i]H6UXMB\ m%~zcܠs}j,_A> ?y_gl>}FCًY5w7w3l&Iv^[\~,"7EnNRn |Ut|a{8_[_[#ZXbs5jlN| lw5YsŲ.!WG"o,}2 SǔucKNjI6b/s'OBC+7st Ь qR RDMyy ͗YM;B:wtR׾kdC{r³ Vqˎᄷ'I@.O;Wp kTms"R™ 6s0%F'& S3rc9&GtW,k2)^IXa)l5VSj [Ma)l5| RQNe gS9wsɩ{ s*^eNeYȵ A!*Z\krBUȵEܛE,bZr6 V!*Z\kBTȵ Vrԥkrrkr"E0 V!*BQȵ V!*ZErBUȵ V!*Z\kUȵ V!*Z\krBUȵJ<%k}T8 =s'AD*ιYRg ѾҫcNŷ _|Ewuaّ:k4!&"o$с>i@"NWۮ]\C=/8x @40 wq05 >_`W@t1<6r'^}n~ߌ'+B6Nי?~lt_k m Bb[֑pb`H rB%τJim)+"@Bu@D墶0ZD)+hFc)rjYM=)ҪHq!)Jh5RT);9rwvy^QH'ώjhMkW"6b-âű##!CGLԓQF6KU˽Vix 7$RD0di MS9L`s*ꂒ)ˉu~LxšNy:ifO^Zs$dg;:T %G#Ұ;L!5Jn>ev0lǧnM`9m *u 򔓯D&T`wX;!8ñ`x;I;GgiEz=?'wwLWOH|19.(Ebh18o3 RHF%S69k5IuvI䵬?qh2$p0D:)0G>%i}tAJ\G3) (U#((-ǜ6 k T:RDvWo%/y]9㩜m{z> ?M]d_f 
icḳʭ,+MM9z>;N.^9H)ڰ"3(T =Qr(X2^5*#9Kj UOkB2N AR-lB%շf;5fr]Wʞu(tj "C8lR[֏E<@eGѰ~[`if#9Yh\*i'!(fJhmcv>lgY`E|K KB*F h2y SKqPu!@|Qû -[83܊m~\Q6o*!x[]'}CL cDzV9+bdd" X48+&\^qdQe>|^ cw~.Ax$w pa܁n}s/Ϟ^ph9T**5a42w$;qɅa/[]M8MyRx|NP *Ts4IAxυ`FBIxN.Q L~iVGԄQQ|ӣǓ棋%cdF8zVG&μ"YvQ9oW#w)'Fe; /q/o՗.z_e27n?\q7y3V.5^vq~Ɋ&60 |Ut!pA51E%vqG.ra:޺E=Ex{GݛԒD 87#٘Dh{m?r3(ʏĆGs㥻;(D2k9wOKřOF_NGߩ69壙 xי@}<{YA[<UΏ͍مlMgO|k i4nǖ]r ƓO&y~_l nahd6O8y*>]?9N7=~8F6:ȮV, i\Fr&>WP$ ڸt0i;J)e*8'~m+?'TG~ȟ??2}q4 m2ׯ¿vˁ43 e -?=|y?ݜ[O y ̿ pfփy>aQ+ŀOjۛ75д!}>lҮ;j]ǼS/H7>h6!Ľzhu4ighwn$LZ7$&pyp&{p! %kठ^(b$a]F|8Eé+"9\ǷEȨhMJo@yHMEM^6u8};T:}͉=G,o>b_l|MOdzJq$K C{5!Ȩ`$xwC\ sUbUIPy0psڙgȖ]Fա:=l GgX #4Rgm>H=!&ji.*`wf|2BTdvB-jXW_)縟l W=) vuep ZFeN-`6e#7\hd^~xÒsSe+0q|j,T&>h߿|9\w(=+c*HDH2@pX98,+- 0 9T`h9 2irIHgUgG#/ ^!BuK"g/Yo@KA\P {]-6E[{]1XG,Yu$gYK4/'47b;VgϏgm.qW7mƥK|z%b:A*4\s헟_E-_jNWQ؂7$ݠ\4HYO7FΚ9&5هd;B eKBkt#[&Ƕ걭W7|6vak{{:j'i837U F?˨Uշm(ا1OM@~!7&{RW,Q4'`ɨB.SQWZE zZ)+$>uUȅQWZJuvԕգ+/XgB?F]2+.1|3e|QATO?vhDk?uQgM^Rtc z綒T,W/")yd52r`U/v.UjETi55g'P(=uUTUVBVz[ 8`MɨB. 
PQWBuWD1|Ea?F_w?ol76Ӡ%D"PP&hicg@g9CJiR?(~(3Ri@PQ45K`}*meUow~Z\g6`zc "RI3٫\u:v [t_g?_%PtN?\*E!ߢ5z R BO@pjKC[H:t5d qֳݐ,$P3~u5=dvc\Ǩ_U+ i=z)K `+e%u ^'`k[*`5`k_Nn|ѫr^d|NZpeJĜDêaA;0nRII5eY,4254ڭ.S7)bDyjlN!A8Υ_cA3NQn92:nŽr~Hz>_2㟿yYQ {ei8x=x-/xi P [Ex;d&wOT<_C6oʠ Ѹyb7K`ǾTH+N VJ!j!j!jURLEq<;oR8VpPr֑"R3ELF t EzZv Y{.=`ѨvvN#1Z!dZl*׈ٳC(PPS}Y1Ї/o(׾PȽ[9 {6eb^{k؜}.CZs7q{^lvL{tv^:U`'uߪ,An-rS6A@ ~fE-M_g§͜\tgkD G `XA%&bCe\"`y<|bYBΏ,XPX(CL< v픭tQ/RkK@XDVmk"ݠS 2fr}rO\[&],P t~IE%f7K~/'CnqEg{3Mc" 'Y+])5(Fÿ!5nn_ R-uxTҋ.+1Yؗj׊"[+¦^?J!H)jğF W\'mk)%K#')FG8,AgT6jV34<4 ۟<ȝhY'qw:-c'0K )LKM4C`2 0YiQJmf@C.Өh)vd Uz8xj$(rY"g3 W32ܭ?8V&cv2&YvҪvOv+&br~m*s B bxƓ5+x zMDZA]YP-"\JsI,mg*AL\YGmYFYYVeDs2HJ$%rT ERYYlB*ЂI^$CX)@:֒ )68/ ]PqLs5BH)m|ɣr]|&%Sgtv.[*2U_T,W c8 2'ӒKs8ZZnDA4uh0 nONMЇǗ|BlX{8###&f4GlN(&HeE P c7Ldx-\q$:%[?ySb&RQ) `( YFtά͍FZʊo7t}ܵs{ WC{*$; 'dlo~zGzJf)6# %MOp]`pczf!P/^䡼_/y0J]d/$")R$F2 O!"4RKZHk}^{Grx-s(0NO]!x#Ur9~tH-g ~tZTԾ^qewThgF>{`yc3 ZL$2L Elʁ6?k{NE&8ti)"4O/[< <^+҄yП0 n{MwSr)P )E/18-21iVg&rd.{gʃBX']~eg}%z "HP%)Zgd) .$Tx+bnrj3@6$+VぅTzyJͅH^E55vY-r6h6N˾Ꚙ뻾' D<\R%,Z|}ȓ7/+X)^2RTV"{ZP$Acy[q%uF7jJT&QlU5#cr,GՓ F阭ѥhrN cLIQI֌"C}-J9.B ՠ ^:m9ZQaġlAB|]]Mݼq-Ќ0$Lȴ )ys4qPЌ6#hB͈FdP?9sI2jnhQc36I 93τkYc8.}ո/Z= *+44^1nIR@L/84s5|TϜ'dl2f DC())͠Z\GpLdB@K-f2t2fAIۧA(64)i4*j |? 6[Ĭ G\ `%g QIapFdf͝(;߂:|$&.}S)zϚǟݶD۶֬o8DM6Mrl#B_򡁫^..Qޝ7&8)d.}")GsJD$$x]'#zn##zaS-=t(BJ\FpI{OSJFnF;%^."yF͎L tƳJG`3Bf UjIS^Fz5r(>;s|yl.^Vc`uEW>0UL[l xHxU~ZwwgFA DmS̈g2 C9KQ'yޝ,-x>ql˧a}.BI1KOP99TDN2UrL&@Λ0Ы0֟AL.WG3[|_jh#d&>bdxXRHܗ)uGOshvhPnY@ \CZ+ԆقahTfc%ڛFn%Jc/{<=qj< 6]{oG*`716q| ܭ3E$X ݯzfH$RȡDc5]_WUWWynvC V{. 9)iaQSc9rh ߦ,ݻ+SPL.\ úp)f.h* uH2&7 EɵSbǣ! 
Dmh"[\$}Y2QY>'FZqICd.M)OE/Oc߹ĝWR x]eI I͜d.dfs*{V*կ(=N=x^t{Q>d7d+Pa 0ExϤSwlpn{W qz7 N[ 5!!mrA@9 ƭ>V)>D>l|f`~/t*DŴVO1vr2Wr\DOI:حU kXQum\>)ݸ}Qoٵ>Ux21¯͏ŋOՃӺ`I|n?gʱz>5#Q^L\"f  e-q%t6S_,i} |LvAw5ywrWtwN.kuY_Syt<|1T8d%c ujRk}YY_\w wG_uˏzۏ_~v`&d=H|k~]|` 5f`\ -__̮=_ 9#!Zg~  gˆE-րpvQiT5֨^W`vu]^?n+@P}!|H1,B/_f~Nк~q~t]߽H|RXI閂N#`REɂKB,фzfǎKfqֆ)}~HP(%a`+@NK8h$2lH("N#:0MӦĆqMĉ޻.U'K]nt:.-Z_v֝X]sև!qg\+*Ƙcd4 IYGH(򹖜N)bxЄ:?d G Gw2z[cR;ĬW逰 9QJx- l8{[Y 2]vwikh=u[i+R=HR  ~=O^}yuIܟ`ͣn\#jAS[a ?ԆC*F a(kT/cJ0g".`n,G (F$1%`")#"b FрG!eLDa\cinYhjn8k` Gˊ.:WV,HSYAs|crp<:q @AR`^3Zi P,rɢ;bRD(;yEh4BM&PfTetE2 5T!07Fj^(g=>Nsٻ6]oe4zEY>f)?`0磫mNqqvN`ϿD=vBR;52oG@skH_ DJf b#FL~g,*Jmh)-Ъu]nyx YlXOmnnӅ :~._(YYI"BDMfeqfr1K=ӟd%T_ŲAٛ>Ojp1 re7y=W'/-`U[W\)%*QId W"\iV](4AQ9Ⱦ0װJ9~ kJL\KE,.\"ͼ8O^_6;BV/gXd'Y5*7s%y8)-CSD.BQ==: %5(XIڂ7N.,;f:XmX8vd,>[10Wwq1ɗ15f#[K|q;ɎPKWJ0f1^9Ŝ9s˭)O2^YɎPpEǸgqVuZs:뷾\]${&5'oEKOxB_ߚP!.}0i<5B ?;bBv*Xg#)ī_G98yĊQ7arŗ$^&S Ũ"ʳ!4;7ʏ\ NQd+~u)\T[ko7`[}kӕ^gIvhl^NjƸ0N߁KIGKe8[$1&7;L%)DMHT9M_&E_w&_܌6Ir8ӣJt6VdѢyn-hn5fN] AT-LEQ|5fDwa~.W-@"=6)Nz{5kL)[v50r;/&Vg6 }F &uD9$ ,>ib d*4 NCzjh,6@Mm4N7CEU*C9/< (q0-:Bڅ7-cFZSE3h1Wh0K M ]uau+B'qtlC6T L7fN A2Eb$Sĩ;䢿B#?S^'7Tn|RS"WCyRX")QI[oI B \%r:j5{JTCpup֜z4J?:\݋`# WVǁQ~\6z/P.>🪖r^E>Nk`儑 NOP.< &eQكɩ99҉VJN8'AQz~+&w1$rƦ̲JcX X1UFA?έYi*%u:Ss;c0t7E ml;oUZDLJP5?~UQрlQ3sdPĹ%00r磳q ;6 tMCbTh4ү,z>:O\Bl2輑FJny/P  cD2Y+냉y'"&Z| Xy$RD1|5y7CM.˧׵w[1Ǡ 'rw[A"~v@Tvf]X{/,O=sy SòoZK1=&m-dfy ,_AsVdRuk+X J]Mu""xW Irr7L--rWf$ƓpI0Ӕa~1 Q?⾃j;^];Wyz:]Qb{fUp$}wtUV/YAV5&qLԸ"m=suGCjkYQY'"NZkS~й{zh (OF'cކ ' Fܽlߝn jcdXwE6b4RCQrMB4 ޘL+Eb'R[AƵ9hQ·E%QkwoPl{kBzݺ?uې`u8 rjJې`H\@kf ^UY֍rB6W=k;c렧[R8.9A!mW\D9T^ͪ^'2L8 +IH\{& %r#0V(J&Y^A:Rb&J5veZ&k8zہ>ssr1o *S;n>yYw~ &6d.۪I(l gb QCQPxflCN_GO`_/uK /I믏zdcNɥ* Ì7ܦ=pcp[,F39rYQ{. 
\w:J|A$eQ1 dBB 4)[CIe=Gن姘< ZH%!Ыf&tѕ8Otx޺"@kSZ.R3=ٴt\dC皀2S2Eo;M:9i=G>X}R[dQM$R"\gN$y|6 cLĒtsJ6;sQXcϔ11֏긝 ؖ,%HZ;d3;,˔N89P(] '|QefAy-i<*8EнxeîNg?;T~-JѦE 01'3#1 [M9},w7ӻ;d\rh|pljx<.oO^s>x,3_Cڞ1=gѥ3^W9u+.Z{~ӓqq!ǻV}-|{ W ɽoMpQ6]+\jXդ`\C^4ީԀLKႬN39s']I9 'ihF ) DaF;>.2y\NNMIu gd1s/H3c'3q֨v9}r7̾́BnzWX|i }~EN~vSjzL5-WW;DZvghU *IkVHeU}JqPyhU  1gS#ՠ$Ed W7C 9jﶉ)B7+W]G)|j(aYcgUb҅/6WMQ0,4*bdXnr+=z2[QJ';:Yƍ@Yr9\xd-M(Nzd02 !YLŤ0*['Rid *dl0%G4X-]`"$ϳ\ZYR@(YXig%]TEWg.W!2,{ i/ мW7pPE_Q7) frh1$^a^Q8q<(dapˑ,q b78:q;(%aύ/2'{ihٵw,9 zN%Rt`ڤkq%q8kX4(&E|qbG#T.g5"ky"F죆sКpֺ?n작`y|AQ9~F 74{0Яm+^Lf[fxLJGӱzlDϋIjpi30%eN5#7cf=MژF},XVr>6O':gwNխ;e}-|׭kϑҰ"y(it;ѿF^Z[1J"ţ$G&1X@G$RwɖsH<P7u:YohwZܨ֦ѿl>r4VK=[n'lhx3* h&gA^MI4 YlVcS p1;֓^(q4GcM;O-1@UkBE_I͟HjA/;.&a(EZKiT[TF3 UCs&̬Z5g5%Ny*NAvju@ĆIvV0Uji=f b2+ "2ZCiS9+xi=S# xa9ˈ$ *wn|L$ҥ:f., W  3ef;u&ΚsBsv~O5r qmgkû2I!6br4grҋ}rr<+*!9H9FL08kςI-+ ȃ:rs-gUg''/ d^BѤʁukљ8{q0Kl_ίZkU}i~E,YP+w6 F(J+ >F(R"64B쐹*1WE\ܙR.EZnTs ͕Q\2C+sU5+Pe2wsU\\sehw\Y;cbgUV])1WvWo fyTB]웾00udoϿt25hF5\4 0< 3$_ooe93[7ӏX-ljkvLnL?NJ޳}Lo9پ l`gUך]1W$-極JKU%nFIb1 !G %YPQrP0 )Ed,2V Q1g%!7mRf<~3;M IJ;WH`rgJW ^)wRJb$-1ݦTvnSm*Meݦܬ6ݦe6ݦTvnSm*Me6ݦTvnSm*Me6ݦTvnS -+gY0K!^.YZԕ;_\l+B` 5)FZ7^6_οHۥi 9x)\ M?+z ZQ(~~rϺS#!?{WΣ_%/'\+WGX>'ɩwPKMeJj](N7BwW5P2"B `PuMH@ʸҒٱR$fRs L}f `BUnF(9Y#Rs8\o^rbhx/vֺsx~y:67_eI*hS1 dL0x mnSs) n2.ƻBo>ϫY*?cirb$yN͙6 ĥ`iz{`NQG0#߿2G|Q4.*`Ðqp`c5%]ɩlre΢m.UbD,qIl|ΞrJ\|2!EZ P+{_ASJJ:jmbdkX;&V/qoh5<2l6qͧcwk xdz׳5[Uꊭ*eT7_\M7/ZOǫ/\\ͺ*==ɄE_7b-?oxi8%P. 
EͧoIq35_"0hBN *lU V,F_cEaBV MSh~L et@=S6WYj_^Og*7ک9ӆ?o8>,w(]kYPrxCpV-lc,M7#/M7zl֏gAl'qwy[A`#)&FMb~U JIg$UdZR0F퇯TRlDJ>d ,~G:eZ# 1Bbg7]R\2Qi+k)M9svQ╓w-Jt5"*GrDE'ǥA&KT'@N,WtK`qVia=d%3PMRq xlʗ?P90]&cXkCEb` rHz9!ToJ=4qIy]-cdc *e& ,TCh2& 0YWI 2!!cbl͓Kj >twU1 m,dД41)kpa xp$@?\t%a-d$凈ՠ L&G?T~sK*FJj2S$ x-gElVf^(_&CY_j@mN9!ٙ|׏_m1Mc ?A3A=hawMCR.3H 8͠|FJ*p{p*~ɲ_:j}=z$G`-TfW!@LEOM!2S ~m75GHxkg>kG'.Gפ}0R>:j;}@>:Z@RH7r!rE_#/w֢U%&(N sSv$]YZm!G+ut{ӳ˟2Y᭶GLη¥RqE})91Er\Yc$l}lcQV2MŢW A)V L$º$C=dзf {7ՆS-a9#~T F͝e=W=ʏFnׇ-[̓=t~'湺 PVţ҆jo=tvٱ%v8?rxGܙ!"rb3ʘ xݤ0|G8ǡ$fXhMHv ҏȃt]N# iAbD`3R d cTDl8 rm1E J5kE]t3M>iWe^ /˧6 8 ڜ]\MoC^>M~n{uv}p6#ŗƉ__9Plw׫b&cWKlJ?ZXzl3ۃ=~"uJ.3H 8͠|FJ*p{p*~ɲ_:J&I'#k .#A7P8SSRmsu1$Cɪ"u *TdTQ{\&F7+O/qvg{?[z\4nc[isy_.A}3%^'s&<;uvw@?LM.Pl$J|NqB?GyWq,Ɇ~6|zv|35xc];ri叛;VwK~&KZ+a7nc^Ϯsh^]0@Id3ٙf Y$mXrϿb|ݶZNg&q""Ǫb/zfn}#W*VfEP?!) lJ]z.L. r:c*;JSlF0%-wuڝ, x4B&͐+Q&CXRɢ&ő0>!f(%z|x96KNET)N1x("t!Z/fQIatdT`YkM:i$!d64tdwLν{ hÒVn@(%{5q]T*P9˴G}`#T)J/} ڴOu}~>98@%. T%J,*AV>xQ8݅>QڟޖR-jZB+aY &x)((}9/B\tn'@19{}i]{z^[ӳcvԳSxs"\q]V}O]f="~c.&;]zet5~zxä1~X4sӮ[3_ \y*^x&0qdv6 .Vj5˟ݶ:Ϋ]yN?߉j&t0֖[Jܑ+Q' $*8ogC]VGp|̼ \%iIu -:;v$i3so薨JXbtc9($}QLF@3 B Dc%lvn]iަx-ftޫϕ5"FdIxusbh3mbK8q_fZ3oEˌ7CA Po6rY$QRɖ+u YSI%< B҄ $FZA}q0"aDu]m?::i<0|Bn5D1p.5+ \$: w[/q\\= )e爹4Jt$P%ZSBff!pl-XHI/)FzdSHQdсyKl2ur'YqQūQxu|1t5MzZJAi!mg~J G%<{'~M̟UF ARZP; N@d}K: b3.88gFK%8 \5,yE\NS6d{0[?||,B>bb}t[ R_Nί8q>= (z|KsV*bi%'%U&лtxq1?kWݿƛ샷ˊ)|2[vI\No;H׊UOI ~٪LjՏ|c= mLQOe)>7z|Ynqv["W=uZK%FHy0steqr!?ܚsfRMbiv2bƢKzU%% {I}/Oי Z)\h !M**$*%YUCrBy2^FkbssAFSQ:wm0k(A+_6\B/5\:F_{v*ՎT*p4߃ ,gI/g⤑>$"N"b+dvTݥs4IeF'bYeYщ.E3XЫ ]``蹰7JVDFA.-A 1yȔ:st42}fa\clnomJyFe׋fu7ܵC3;1Vrr2ɎOΞgT &jN 5zg]ȂI\(茍J0: 1trkG4$z]deK: 7F@-= UFt%,wY/@KFw^uWL.nb]ڮ,YK ;pIt&2eʸ*)ew|̕`9PtFH_}~~=u$~ .̝|vn4-ǘ?uiЀ2(I]YmUWod(NzTQAx5g"ZƷ"ޑE6qsyԖe;Q|o@ٴ!CvU +=m H10S6Hٔ-W6eM񘨫Co.DWuGmYHB(Ow>=4}蚷 pN!ƀ>0d9zx%$VXy6*mr@EA]4C'A36C6ΞqxOc9@[@BLE#gL 2) 9\ݞcdNP4ݳA#6H^MQiA;{-$y5x/:e%Df6s eYyA$vJC>53B& \"Y6&;I`hҋXtX  LCFo$ut 8w/S Bϧ6r /9hNAtVvlz]X!M(EDԽ-~L ,"җ"AoT q:D2up|"F+ 4UQKT l L딷`%iWM !,ulZus 
莿',ξMlLu9(gT\|f?x9&i6p X/TJ&%0MG rf*Q8g΁,;==q|Ό|5r5pAOE9xV2kG$s$2dnX\f% KA{ށ^#W8qժF.751b ꅭǪ=Na(ܮ۩>]jW]KZ*\=b'V;'`[UNcb'z u%K*A]?g\v kh`vl#&{gןo]&ӡvΣt3iCd.~붎>^(ٽ7F_/cBa< Uퟟӿfsi 3`lz~׏zv/\eIYJ\J!sU ^:J0 9^3:}Z\hΦ۴y8r{dJ\ |_,ݒp+:K`D[?BRQi;\ G+)]+Ql;\_\pZ'ڛ>n YӲ>bK&+UIRI- *xm^b^bA\:Ĥ<F #CJlmv9-Ѐ!tD"1Nx6b- 6G&LsUʳ1KT0 Ƹ: ^m?s*__~~x[^2΍%myӫ^u@K_͊~"'j`hlk7P6 R̥++&:/t vU-t2dNdU3fkR49 k xWIDiB9}] c/''$4gykOyIGEuIF Y0I2EQ F!ٻ6r\*~y{3UAGXz2Ud[iI)KN&8jfw[GRlr ļhDcMTTݖ]_Y\_֑kV&ҋ 9#>"Yz_1eMoȚui6=˷&۩%#ar*_q*w=ԟ{p Ί =>[} סIz?R)t%DY63:#ߑU(|56Ip(bcjOȴ*/zdrU6>une\2dG((2OC^XA[ث_̾ήf#Ukm{F{[{]}Zc/p?P?n:=Zo{m}r֑?S,8*4m?˹{[ȸ߸ʵ*9ww&Ās !_^t?!WpV`f;dlMV;P&=}'wƽy'lZ]Dk\uuBjrt"+a1%Lvzt"6(՞ig<sS}tnwc"U*&6WT0Ɉ f'SFWֆS 3{j&+gW]}YƍZq~kc[c1OPh+v )MͿ,^ؙ+yHQ RL!PLiS䉦M UeTd%@)5bb)"PA򦘾0-6B"U2V:b(Yd&g zTƈݖ#X_$qem?ZHjʻ+%| >&y$ aTn:Z#R*\F0 TJ򁠲=qlO*^_6YP8=8 Xřvbm#E]eUmx22gLUZlO7-ٓo.i~kuѿ:0l^'8)ES.CR6:`SHX39L/E8Eh0Lb[<0%A JqBIE J᫣*- |U09s(v} 9XK ZqR)P ktP<~ZY~DY/tv?4}/È7^ z9\oq3|s$X6AA Z4)J&o!2!M[73WXUxf|ao[YVcnÃ0^x r=hÊV:eleΧcwCf[jP.o6<ըPܾe2SvSՌ-q0_LU/\Lf H[D5}z!ܲgβSl " -A9˾XP^㐴+Ѩtv‰]pͣ9;p@×\.oggECe);t{THadK6W& 9V\Rk(4'5 z>wZևuxRrk4o],GNv/s OV}rgRL{Y30-Ʃ֫kaŠ_oo~. P.j8?PjTGB\e3$(Y!n6˧~x{v?O7ʾ [|SÁ?5S6֧汝=q*(m!tǫNՄ_F> c14Bb.E`l8 ~|B0 Ϝxѧl<Z=cqg/ a8Y{Q?*2.$YUOuҡFH1YC߼#R5"M`;V7l 4*8.]vk9q']0|Ijɵ K;-VL?nWS( Ի W֒WڗE<Qya7Z_2$U>X[98WW$D1J4[Ul\NA%k}aQUqurv6 A&M9MAB]}¤sujgf芒b#Hi[Mk* &t ޳mKlU dckY`;A U>R[5 ThL9V:UW ?g4):E)Ks$ZH6" 8L: U6$@ dktt솩%L[VNho m`|C*th2ebb`j1 kΥťU7\+Ec ko7V>K[|B! 
{=n/F\Ŧ꺁rvw[ ‹?Ξ<퓟}{wlK{[g{ ]-Zqn_r~䛽{{?oL~˯4X0v}xrOuÑH,EeGzl;3C<|mfp(`TrfPUfO*8:Qzzazo4wh/;c-\qBqų@J1TlV蔂n`-mUc$w+hb* E_[.>:鵜;_ʚsy*Vt՞&l@(ޚO%'dA"d'1E4 %ftYQQ ^0B"*zC(.PMA+щo଎գIgr O8 Vлql rTiAua].ltnz >u?TȍK7E{j@h;0C!Smorq[&}}\Lx⾒ݧx_8x䫎;;̼6r<\_o{EwJ&xɷL\ ˖4PvZ>ݵE8?18_Ŀd$bۗ.s|#&`C&t3៝+W0 C*UY;A&*̜8UDXU?Iׁge?g5ا}e/A6V4j |FZB1Vr+J ¡)v}Ss92>-cOw}e<1Md7M)*V Qy(ESruZ*UB((Ly429{L2 0\{%WCt@1`#aIt~{X[qu&le JW84,L E)Jü< jI1,{audQdy@_N>D4kko5װC׆49үir5|__*Y>)Wr^?*j'!='Mb}tWx^9=6\RB@8<8=zp@ A%%kठ^(HzooRWj+"9\ ӢBdI&7ʃ@s9*44rSʹ5M 37T|5G;z+zU% yJC̣exy:<Sj @^qGrB})Pࡘ]OG0L~EV'0J@fVD.aۀl  {|:k֔z.hCL$]T/  F8ЈNНrr,|8scܾv,@ D5RU]gmqE 2FEPur-9HǙӚި̄ly Ѻ2Ee&d+LNf0AirfpZY?ΗW<:]vfK&L:X(} os'clŅرIBluT&H@xFN &_F/&$ Ր"4K_HdUgM qPm9A5xU\HleK,M.q|$LP =Mb`"2^vq0."㎶HmIN݊y"Nx郳 y61㱲hE2$U@$* BFaRY\ih != eM)KTc$օ$hTxl<-+>'^sy-1xDT KDIN"nbIJc$j:y8yBDAkzb$pK4tB!lpєm(S<(`)G DFI{YV|NDh(hSyZ";/̋UΦrQb'[-^.<  *Dm9W}ҿt˶W};Dyu} SD(TY'aQ p(yR0R;(DK &qa,7T+r .ŽpŽ[-сVD,#.Q L]H/wFI>)a·X% E}<(n=a>X28~l'tMfbM8YvQ9?gFrj}Q85YrQI>@շAKE_eQnzzڻOs-Y3~/>+s7Q{8=C2`}Ȏ)(/r߷e_P^kZh*y(>?P(mBnnC%9dz<|JيNlgB_*/sMm%D;&`v5{RP|4z0}]S3}u˜u}Vޝ9-o.F|[~_X]8].>8T|M+u8N圶ܬi$'O7 ߊ8Iڞ@htYӨDcp<%(?C~==>?cq~@ ̢puCCm<$,5a+3gW¸"-gO7_a9\&Tg(b-OV8/qWpWha[3# ri'YS]cˮ6Y/`~MSJTTWPAL#AuP Uu>$9[o[brԗ ¹e փJ*)Xx'BwF{{ҥϿm8 \ɉY "N5)QZ$UL. HS^ݡN#:eL[XO0z󠾉z^sm'=WuK_R7y C{1F}<+ MTࡘ]OGMPrH,8 Q 5?{WƮc%_ $/V? a{ű},ge) )9PU! $7e vbsAFoS`է֝|` ;;0glڣoJ'Ju$J @,Y~|"s9=}%:AObNdjJҹGmh ׄ6]@Ƭר)i3`- Bี1!D"ߊ"kѤV4تCQH**vvugGwmN4^1naon/.xz5뷟z[ۜtb$;~9=;t~ sr)R``_4%"YW #JtP=g ˄Ax  u /wIiST1y6ȉR 0'A!M%]BKr<{Gu&Lwk$=?;d%j}g_62' <5!in9TeK(`L51gG娢]S+ۻ2AZ虐B)ՙb"{: {(HH0\#mtAF]u(o'qZIV5Z9٘r(KqjR䌮 =Gpp2ti(lFZ]G6^Dz[vy2ى/AO75}wƑ011GR h+VX TՄ;! 
M?:#}YicydZc8~.i1fruuo13YaAmha> m]rl,,Z E#dž/ZΧxKWHr5>^gk # -A]H&Z[XP~ VWLި4xb8ٹ&sNνEdE$SR*$氯lfUMڲMZANz.V 0Vƞ"W{|M2kG^8z:GR?}3'4+ w-3n]>rwyh˶0;r*~(߮>eKlIkW~%j7i>|aWWI@ O(qVZLlטSIm:=&%”8ng^@+T=k߇d9:* 7+ >(+῕w*ߟуYVv [ *wt~_&.ˤ802)L`zG VW;ҋf͆t)p^or=:P`H*9e3(?3*coU޺9Ysb{:$b`teC5&{O/XLetNe{Bc:k/i/N|FWyK?9]'W&W~+x֐˥!7]Σil?!"P[?7j\BԇP5?VMY };Y`7[5G5[ajl?ԱZSrб֒A\(5@ՆU)ѳ\t✢Me+ʲ6}r4y'pmK 7&f&maѱaW`iGEAq rm|N얩Sc必(}u&p^S%bo6_]VlkVft|RMt]v}2=/g^y=^_oouÚ)z.&2qIJv^fXl[_<Âڵm<~w|5sm͏e񄊤4}8")"nPT4i{&%HʟX$#h/Wܺ[B⏏p쬃O8* Brru[-`&|)}nt>EOGST2"$Q؂TUA1Ѻ|R r{5g/r22ARȺV"P#ƺb."+TI(r︖^kxc'qfi\Cgۛm\[F5o^W3Y=j٪~H`3G2v竏Z ?y}^Kc}K1C-Eh\ڐZƭ$yƎ=6gxH^88{N}˓H5FQ? X VH N:ֆ ͸#ű$ЊUIho!j*Zq"rvf) k13&9җ_fngicogi\\i0rKlovWl8fjozzTJc)%bjTAuC.V%c&-a_]Qɺ`:ժQ^f r b,Y*j\ sQs=&ΎqO)Ȥ)B׌MSPG Qktj0M3ld,[KRõthd0d/ZVrȦ*h} +PuNZ[\BT*4\=rաB@D<{_%'0WObo$7gZk͆E7Q}#غ9NHHGD<;mbv}YhQHBh\'B:FCv9l\+`abIaIhX]zq k ȖzEct $RE.JQIٻƍdW J}0x`'Ab_20j )(͌?$u-eNbGd]J Ø01gz^NVn ,m 6Tnn׾^? 8c_iy$>rlҚG,m r7]#m~7.΋ɗ%R_H6}["]"崎!U4ڡXt;W@__߃te~~~If,QYאmPea~dN 1X7K_w7k1r܅?L{`E>ג)E 0{:^KUJ&Pݣ9zTgu/Tk5֥TVE b"aɟFȾGQjG #@Dlj5-‹ǎ F~aqn1 T!E<jhGܯw=1QO/x0Ypޥa'0D=A\ gQΉ^DTFb\|~ۓINN-o:ɯ$iʰu{Zm}K0p_DB #r ICnBy4^G*#R"%1β( J3| z$@2A:8VyCb4Fº {nVC V{. 9)iaQSc9rh =? 
a{:bEp @Y0ѻB4p)f.h* u/H }*& s%N4 < 9Jj".Ii.)\&b媥-|e2ngɃg;O^īq0a4NǓct~aKWYtpEaٙ)<5ԪXwQ:Hx hk/EcvÐL29+00EdpRP v|5(g/a(\ !ë+ȝ4'Obg(TTJjQŧ܇aͥcŅVٸZAߞ(4R˩Mç9gDdksjZgu#+WowՍ7YsxmtE uT-+ڮ9bRJ?*>)W B>Gٶɩ$zmk [Z˰1dңe&1Kjc8ۛPɖ ̍.π~H?9O;yÇL|xVf]!p!na+3gʸ"-_--̴sҧu޸4A[g~?0Ec0*3<,?,ϻE[tMۡkurKٯ]J(1mĔK=Ҏ Wo k죇E/ I6i" Q:yp a\(e!)MgVpdFIOmXh?)$r74GT"AXkR8-`8ʰ"m\ #$N':e/DzWG0%4Tؕq_xOk\c,%YE5O9/* \"P8ie1 )*eGZ*P{QjQz{!݌ J٢v |TYIҊ_->M {v:@1D rvQ 1rT=JF#B{{(!rƽb3((qE4dR@(h"7lTe><Q|(_ F8 mPQC19E@2r'V#)Msb+nd:;iU=yuR $u"bRe YXvT3#FRA͓q7u*>xR/p*%q#!)=BPQ0Ȥ' Dot'{doТEfw]7-{0^B'n~ˉ;WƅsNX!@i;)f܂RI#1%J-uo~[#rO4SxK9ś-&NyU…onR&ݏ)M aJ`ƳGB@$2f̯thVw\1Xiu7{/S=0hw_Y*G$06W[:Z;݂58}eCRG .A/k"Th6Vre =C[0˿zM({D`RGSJmΌ6>iyHI }p/pznW_/{glWýܧxª/=z&:%ƫ韟l`+D(\!r -%& 4=zZ+wtleYB 0ԺȢl00}8:i\v&} )9wuF8W%{g23p,jO}_b}Z%9?Ɵ!92bey!ug%\ĀIJN!]bW o'۠ 37lfSExכ<6Z|Ӹb7mo<}ÓܾE9^H2NyQdF9s&1U2hJފJy4rQZ,fHGG,^q@G=kᝓA@q#'ݞcB)7j<[֑av`;X+&JH!]cΆs7F _Я$t:wGX{y~>1[mCg` 9I<ס3%a3 n2 i͆]G/p)?#b~ՃLx2>OÆ+ן>lGQ()^+f 0Kf_Yf^BT5;_pه i~@`*6Ih:ko[oD^F8[m`ÒWFۭuR_}T g;秣bE-fml>-ZB/?ed>k¦97C.C 1̧ˡ&_(ՇnuW/5ZSWM-ܪp8MLc}!_wk'*kGV鈜d9u,Иx0+dk`8 Yl^[ᄖ!b8^<hl1( 04"CHo+@7Op'%jl@5?`qUF[I.cw#Uގ%|LE]^mx[~))Ru A!R1o^Ef(hw >eEZ<;Yu,B+O( 9drN4胊ܡH_-Qkhc{^I= #!@{I.)} )[4D޵qIHz5P'崟_XwOǍJc~d +buS(ꩱ[o9f4K0C{niA+ v0^A"8,`% _8F@.<׽ u+H VM$.J!x&5l$j4G1r%7+wW>K`kyj$^-/ f?YGjI1<v(C 3.U[]* ʢynLMIug W^5ƾQ>f7 ɴS&(`@ړA#L\b,Å_ (A뼹 WE-5iatHp{`{ j 6r'gqz8(ScqI PKA5<W٥HբzmGO*-tx8]\/8+.JHnV2rjih0=Y)?fԴiw >V;o泳:00I먈YM[W]seŤy>B!W=1nH{7 v,@h}(&~ &*U|R.W}.FmS#h{I6gJ㸑_K2~}?= |le-~PERD(2%W(ć/)3ԑ`T0'+<RmyMk>d~ίx9M/o?/o~{+{/g&SMd\'1OwJ:S[0Mֿç_&oS,f+t^ȓt~~t>='L˃ qգyGk;<ڋoX<׷|5"4~1[YfgIx'~Ou'0j}tM|_>m.0b ^D^NW)0A)$W:Ղކ% 32Fȫ #iK-d%#5-a4(\tTj@?ir3h#ԷXSvF8Q^{k7xU=H.(9@a3 Eie&PTVho &{QrAl8j<Ȩ!=kTqޫJ;:2kN:cYuy dxN)SfRn`0\T0vj'S='4kGcW DZGUɖbGaDIA*'d[uB-O0rŔKdE*Q&tI B[8;c;>gNe5Cw{Oinԯ=G<ϗZ>H $ %&@H;o+B>IRѡX!~i@6],xQ jìR(N6D6ɦTOcXGٞ`(-r̎T4>03iz-yCEZ'20|#Ws4۔^^yzY޷9;,ۉwsp9|{Obh |:x9N1H!30$aׁhs*_|T~julRCN) ~`\t$rOjOJ-I$?Jx=hQesD~l#"H (H2aAfkY J15k{C4OڪC`|/)IQKT9' S"Q8@3!?(F+NUÔ>dMeieז뺲,.6 
?nkVoe.&SEk6fv"RNXx XmuE`$mKVR5(9,=A .ɘ-y&eI^$h+1y!2M)`^ cWQ@%C|6gt6/M~ %c˦?@""VYa W#sl xS(&I7[h\R>Zf c~Pױ1Ah02oDAl0itM(cJ' oHtI]Fcיc3C]iauc[ ݁s2:y,QhZUqAK*-ΫRz[PHC5dz0j1O9)-`2@|pnfم!"C3M^.MJIReS͒W0XTJP[\։ug|{BC܍^];\h29PhYm8Nߗqͱs1P:/oˋ$G3ݘ49LdBCLχO$S2 h$MCQFEmv%&nJZ8dkTlc@?K>AʬӥJ"96d=>8f譅Z{]S{3TLv4Su^\K7Ŵ2vNDw_6KbbdM8(A!QgD|uOu\/xnt-D]tιj> ~}/HT}e=8Krx}{@ٜp_BbVd~ûhc|daYd3L@ >QgO̸S|Qf@Q~VUùm!g4E9ᢓ>A(%@-R$CMD~{p]PjQGfn5)ƭ/(!j&MY/0>8(*S)(4$+2"=$#m6,(xʦ86)HR*msl{EQpVlW|z=#bnIź_}KYfe9qYGLA?_]EկokfO'kZQp6ҬdtzxX,⦨C6P{4 #Y*1+:n^Y@dJjlI(jl:'^ƦJ!'W JAJ0HFf b!6Bӱp弄o|>5ugsoNj Gl( Ӑl IZS0uvh+2ms`'k6tW;]!)cTZUADHVd {gX(WOk}xJ/'iŸ j7[cAm."lAJ1%k5ofKm)^M6qml6-23C lEΖ&eA̒#dIبfyBcpq,xlu#xa[@;[ȭ8uy,rypHgmSj!Sv属^JM#Vprs/ӫI0mjDAآB :ô ^ I6ƣdi JD\A%׆ZЪgQ*_lj-|W Hk~ҳ11f ؅w᫐E^һ<<\jmMb1?*$|k=ni|kgAs:}0כ0Vvyrx˫;/k~H;|)=Yi$z&+o2\]+ok~3Ow|aYnݕxWZoe*sϯf ᬸb \֫!YFnHǑftF:HQBD rV0d9Hk \/mu!BOռ̲6&OeEFM >&פHaKKVPbzc2sjIVzy3YMV\Kҗ%w?T-hLSyU`}hr]rG,%JՏo(yDpUF}4pUŵXJGkDi;\p=UWU`/wB \UiaJX;\;\zNe^X''w}/<,}|z7d#v)6_O蟅>/럯rr}y35l Rw5<Vro?7+Ips~1{Q1['0Iӭ_1I0_ǓI^ 朦=2,ӻ8I?=ep"e [Ey{{=O/ይ,J-$gښ۸_QeVF<8V%/ڗM`LQFr,uEPqbK@LF_:X{9'=h?NOfΥ|R{p_?~ Z_D oRi@xJ"@zfxtΛM=-{!L-D~'W/'%o5CJ[ 9Ag)2Yr"XixfKAzfKaګ̒{~R ֯ރw=Di Cp^ɶaA6dҫ͗oTٚzK[&X >èu`[nlAJ'In?: xez~-ç'k_n%2Y "wOF!gQڔ!Dl~^AS[UFbJ*DcZ0M w҄i1 Y)5$@I R 2BM~~ibJMjO#F@xcU cNP| K(Ry[ ޸5}|4PۃN\Yq\sDuE^V{ԕrOO<iGaM~3?N5Yt(U#<q` |zocC:dH!7Gkބq|=g<~Ǿ]uFU|D!׶ڒ.%m!P^.yFarPt1R?rV8/NC"݈8iv޾.T0i}C)hS觱ݣO.^gը.7i6`:E_ ?]>xCUzѤGk.8G&'/;?;fɏ(IJ[:g:^6 ,ɥ,2{ڔ''> Ud|9_z|ùb\6gE KV7xHX3 hI9q(ȃܹ&D/afNpw?KO_>w88T_0My ܜ +za;3gDW`-R~IyRw9« O*ru3Z0W-4ŲergA߈S+L="glɼG^2nkXQS!,.W.!}i@-+m$F;bmy! 
J9\ˆRQTh$m1Fzjg\Nz UvnۂvZ(+48-fѰ1i[B )ì @ȲT tz9i4'lx>GӸm9Br<FܣV a;(vЙ(c¬r>xY}쨇@]!g&CHP601C碡sўv.Q&IV=[J`F<9R3HQh+ ٯ Q%cBBt\t1YL 0Tl."|:dolh}*,}xz62#cw\&&pq~4Iz|v~tMF)j}lD3&|, +b>I茴PL^] d/LS ^\,cq30Hk4ŦX*t%7rKz+ڼ,s^]{5Y7~bHg]1XdYϑc{nԣM N FH~\Cv6v5 YTI<,vдIrܙui؃FŸ8a\92l`;6Ӡl=FZDn?wS &H `Z,5-IYS0\XLN.i(C)E$a2iڢV`tP+7Qj̉0L#ꍜN.ijJ\:meISɗ>C톾/N[o~:uD*Z$ẽs6%KePRdM] L{{ {NHg!k0Zkbo7跃}< Ϫw&Mi[=>qh𧙤4CCA+u㲲ֻP9++fC4)͊b#`(|!J&+:6K$)ylU +t0@#*%()rEh4lwDM HFc!F+T[G썜:"#ts_I$R=oToZ 5_ o*\*&/`݋_P5/:ʅC\l`֗E|e,<m)K[Elq4BQm#Xm6GD]n#ve5؁/9fGhؐ)z-yCĒEJ%gd2BHG zIINR)w\>ƣLe~<=[tie;N|6޻O{> G&>aSL)R &IcAhOJg T\BcK$oO6 DpS.9;L҂q/w j ve3,[k ؒR3k(RKn䠋l3ԮCꢍ ,il ̚DIeX &Ê#cb*: broΨeT`D?ED3"D\ŢUԒ}^fQPYGcYXd {ED"Y+ULLX1H#bKЪ`%Dž7r6hsUGvnWξdK\Ԣg\t.ȃ+h&| @6 d؈J%Dpq/xw -{PMº=r7Eu4D~ *n|pHCҞNz{TC|46@6h 4!hՄT,!$f{ [6PwEm g{ȑW6!M73lfbE[-$9pzG,ږ3 Evb= w4!̡&ȒR2ey?3 U.7ND_?/8mtnz ou9T r-i] L]s2:znƇH )c9*frk]0vV潔 vu{E_f=/\pMeL~ vr8[,:+Xs6Yptᶙ?+gWx_eo5nBlqݼ^N(ro5gɜPE*+5Y㝄FpBy\UI"%].NIW{P҉^)pFʁx)"\zI/ۉk!e3䘍dGxg<A謏H>p(~Ay*}|Um*ryGzO.foiz]ьQwkl֘$DYDXÑТ2FD gh Ij4<źh ^bS%tuf5Ơj 3VU$~F C"W%P{pQ1=)Ziv0 J2w$3Igha8fn5|sIۉ:|p ٪~``ON/9v#Wg_3wieލJt1-e域p;leV .n3Jt黩/٨B<W3x!i7mz0m|7΍'xiZ9;9:]juOڊf/jvI8z4w1>{5bIQ{|öw!“ػAd3'j4{ܨ7Ea2 bg 5ݽHEIHy:IvHu4Bx0J\MsʪG][;-=<1h瑕ws#c,A-CBSkT+@sT&HAw.jK>K,hO[6 B2*Z驎7t~RlwG 퓏)9.ł~ȌiH>}](^F|Xd*Q=w.Zz}f 2i,tұ7e_ A=I#Wm]yVE-V5G|g IU ^؊vu⹬_⹭_٬_;;Rр% J.FQB,]A$IJnUI$% 5Ƅļi O#gRHځ7c̫-ެ1q^]BQ#2x= VBGqU`5Fh V}k4X+FH Vj4XkFh?9P#65$֐PCjH@ !MёVPQR !5$ԐPCA'$a!5$ԐPCjH@ C0e1D%&a&ג0S+ɾg1T*R JjBqMm*kOTsYwhܻ,'4Q<ǎI(!j %ՉKYJ{ ?v#V>-7qП{lqf<( bAHI>*/7 L)oETuD[FT H6Lϒ"8hqh%7FPeZJ["gBQkt@-}yf g[ǤCҍ6{}WI芃ˢf4Ĕa(ɢ[o8Cʁ_:1ކ1XZ,rE#,VɠAcY,q)tB7h=@#0l䬨dTF$&)cb┡`:&Yt B(!*R)\B!UZ2#gPbX,,,U>,S6a`8L]A../7.S$! 
~`!"H)QیNƑ`.: O!+{.2½ge'Q3uBݎ J˔><-Ef퇃b."Ux</FmMHwJ*%l!B\0p2Y vBZT&Õ jȈ @-:p(k"&8 $# T#=<,Ffy*9}ԏHbX KD]%bkX!vBCp'!>bcU,h\ >HT2 @Y:k(!J ԍA NmRKY"j&Ԥi*BE*,Y"~>|⸙.uGEI ESb8`IBp"hARB !QHFJT$ZGm L*B.<,{"#<{=fZiJu1~0{q7zs37&ϡ7Ve#燹LTE{AuXg8AԠ" 2QB9aWT+)n4tb }nSGpӻri[Z} RCʚ0l 9v%8ڡq8ɣccB޻5i^0x#Y$iV>Iȇt1lc ]ef6`' e;-8@UV $}{O,m՜~ vj9?ϢZ[x6/iw*8Һg_Akk6䜌4 @U8$.WrJDq]J6?!$ܥFօbz0}89}<;Cˆ' ˝Xhuk孕 nD˧&u-y}rDc!bv#ARe1wہZ/VG7ăt$EXG 9ߤ(@(*W QT@ZFmQGznc%t]O2J% 2+Y! e͜j0+,ug:Mtf$i5' x>"u';;3(/عcZ5ێAJtNym[¦#vӳ%L*@H3h-xף(JKR d gWJdpYH/ O18""b)9aV c'_o4ط@dʿ/l%Wo#z!JUP:>3D_1M[TׅL`;!uR'GKxSG25fhl̎T%:#QTrP'r2hK=F^YJDB!sQ҆=Kdjm*&AQXpr< >vgاX; ꮬ?on&,dx֜~9|fb2;/u&V,OX(cuPܬh%$#KPMWptRVļ$2tֻX+9Ë4@GRUuef_Wk❲\ޮ1˪/$0Q|͒R/`vϻߙ3Uи":Ts>wIͨT [ko\~߯߯v9Lَ̜L녘t\grR9NWen347?Wag@›o̓&߼?PG`v=x$/^~d(}槉+3i*خtiR9^2%wm*2^#ns;wO8޼7 ˼uʳ}+thz-o;WjXtݗw~w??M;7\/Aryuy`^siz:;?|0W? ]~Vƃɧ镨g^6l? I)7{?>O膹hTL3;+vjЎ?eӨ>E7\IqO~jD!'B,QZg Q'8z`s={ m- i()E$-8aѴA_bAj)*`)YFhlo&zA#Z[ {s1/dQ}J:+ ijw1H٬Lq6YTֈQ&GuJ3$ W"%ՉDQMv)$M9ZRG|NhL1z'!" ,4`2B89"#tw_d4ZQkUs3w ~W<+$#|!I4P-TϻRwT>߾4 6]N5Zx`18-MdSل'x]] ݧ1.DFHyfNY%3CddRɒL$4S^1I[I7W/qXm|<;l::q;8ήl>m"[(h38B$wE3D mL&$SBD!d҈[A:i< / ^x!)Z=3!sEyiEԼ(`,(jYD~)`+)iE _+r:]DLqQiw,~ޤKqdhP"E5^@nXB6K-@..B78)$V)(|fЎ:):0|kP@I,#F:6f0~aX cZ.?{8L'>~œ>MEtvX|e0{ ᄾ is,W'KI %l2 ެn=99Gf(-DO[M{fIrPl}|I߀4tѹ`\lF n^O#M,w:p DhwfdxHb;'.fc_MVل2_103?, -hZyre\nYxz܅"`Khh~z_!YI-ޡǺ!.y/VRnwW!ZI&ŐygRuaVo:փS*SRZN&J Lu Te)j4M41Mv̕^+Mg_vUԞ J}*PKUW:btH&CZz9^Ы30n7Nբ:SǪ=(!c˄WI (7I7I۞ OO\fL& '[7pe}*yR++N\F꽁*.}*-]*%^ \ _z+Z~zd&yav0i|zR݂+R#\=#h} n9T|irvO|`t'< 8曾߂߿tr>ox4O ѝ~';iނuRIۏ~n}*{RO*aY`\U:bi]*%Be;#bA*8? vKmFz ruq29ꣴ?䋽oj\{Mp/wu~$}݀Co kݢ?ݰ_~ n<]5_@5?sWZkH Xs%!L*/dۘ BBH6&!?e(mW??Os0'}|=ͺAj v*aVV՝ˠ=t)x.а;%Kd>xJ"G(- -!z,eW<3[#>DCq\ϙt+lo!EYB)9S*IS@&5po&ΆWBhx}f_"ty@0qbK!sHD*#B0iMjFaGAb*YS־CJ+kQR)#llP~>B7|bPi{v+a)cRqX >"lr J׷:^rmD}S23K ^ߗ$MQy}d9ґPhuPBk\)P! X> ɀKn'+.s5}[4-fǮ6=2U3>dUDѰb 5Y-5y:[L:mD9ēފlT<16g=JeZ c4FD3"∈+X@e;l0D! IZD+![rSDtV AY7tT 3! 
5EbY0w7oy~ݖ X}kWW|XkRR;28A2K5޻S>L TvԾm6t4Zu`6D-SdᴲlHZmCiMI3'32ܚs܈u~>טulaJtIBُZj]G$r%q=+=Oɳ(_c} ?MVJ%>_\380IH{= ޹>9sir'?cdh{ \,[>viq|dJjyi|ĥ,P>Q1Hξ}Ә,pej&RL1A(z ? Q0@hEAEP`fݓV47x&7ףYeY.\]=b$#MSS4o{.+_R"SHr5Wզ+:R")9IEd]%Y%OxLxN+z 0B $;Ǧy8 Uhf;B%-2^Gg:HΩAJZbXέ3H&'!klz!N]_1,]LM40p)f.h* ufn.1| EɵS AO) 6xqK G1rՒF89o zseDq׃M4?+p*0=JpԬJm',8H^~J~ VN4%Kex#|`Ae|F=K.YrްV`$1"aHozgRP,ko68.ז@)(Ű~(XR.*\l^ކt4QP | &_ڙSS z^.?m98KBJ Z.rS=En<ÿO RZJs#L洸hnO'vQ\y[_8_m*f[<6_U  n^v9RTż.~B(T/Bm#q:G CڇQuMb9 0vI5Dl=p<49>9nu9ɶQ暚GȰ40}rҫ/f">bWS}^*&j^6˅N@8:}}?>==|}|:=Cq~ ̃KU@ ;3t+a\ғŗʿΚ .)Jô'F|ᗏ@S>n6H|\Cț"] -0F]zAr˸_W׍}-B9IisJߖ:|O:ōeml$ =cɦ;P$!JV'.! e #:GF;.e鱝 KRm^;#*Q$a`+NK8$2 TWE!FN'g:0MtW4q\.r=dYe*G>I!L<@zg~ۅxF.LJP{a⢪79s%8]/_̂}=1&RܭT'5b+t@؀Iad^ )f, TcmU7l҆ܶܦWW l7ZP@a<U1ҾĿ6+Ok9^ۖYsբ^$طͳVήirtwM.錗lЅϣˏԴAm(eB|c5ҬR=J^ aUV1M+{ƓABGT r; ui%)!"7ElOuޕeNt_ TS4! E@zMb0e\ T[%LZW ѻ{SӮ'[.EކK;_g눃ǏW3m0 $+"0'JH",Z#V ,E춚ӘG0ČAc,4deF͝QFG F^T3xR*:uGY?O&vl맮yk{6EoݮȖX >jjugS3sՙUlku \'C ?#qVG\%pXow xK '(E.-3+$FƾƹȞCW H\ 7*&u!R-.9%5/ :SNvդiuJƌqVed$1j1h8ooI ^kQ.nFi(߽~F(2|>:|4FHi\ZCa1X&qMxjbb#BtTߋȒ^(P禈Ol}r& i=:nfsYr^< S)ի!`}+9|S#Ly<VPwwZ!IIΊY9+:gEsrVtΊ̒sVtΊY9+:gE5JhM%Y9'gEsVtΊ*=T*0s*7}XkRNG dWV_Sh5 )] V-s/S7ξFS L`X)0L)3XWvB1bL<iI FDpCGx%0l阉;v&^i][ `:e[x[O_twYOuZ6Ksݒm']tם¯wNu20Jjyi|ĥ,P$G>$WYq$G{Y#"H ? Qø@PhEAEP`$gt{bzة5v{)hl0ҰqSBKO|ISsHBy[D|ny#jCW'T@̺S8nvC y::]ArN Rrn@2? X}+b$cQ0ЅN4sAS\x3#uHw~I$.J!x>58D\\R>+=ps+!O5cDq׃M4?+?K%؁`Ozjou%i[_/8H iûb`*+1 ?NZ9AҔhO ^ G~PN+!43KuX+0C0E7E3)(lv7Hk˿Jt]tbX? 
H ˥CRR%+mNkhuv((T&G\pnb| }Sk15益}:8W%NJ ҁPt |鞢L#yHp~Vz4Es{rEgsm}b>X|qmFR|Vq4<4z HQM:oQ^2Fu$Ʒ4n_:&PMjLXL}]"zs8ojA7dۨmse mta#i`ܥW_B1E}ŤS}^*&j^6˅N@8:}}?>==|}|:=Cq~ ̃Km p~olE0QKIa_ gM2='Z4L[qb~0n g[ --+HWCxˡb; Q^..2Wuc_ENl%G%޾N'PߕsDѭ6~2'5J4B(U[0.q2(MgVpdFmv6,KYsϷ}t8y-J$(rjaoX 0&1 [,ҹ 1pLt:9әt_l;}˲C]Ov~|S(*%:3|yVSoϺH͗GtZ3C;q()bk小3!u ^,D"$Z^q Y0<`"\DpJH;9#5X!ǁ)Dd.F\>ի^UYo.Wp5O3`xղ5j\-r1sȂ'0lQp2,* -N $Ō !HsJӜb:fӍ>Vѓ%g +H8 \'."E)-['0f"KAXI%OW;FϮV"ٻ6$W~> 4fwggэi̾t0"/cTmy0}#HȤTrC*&""xf^{T(&ƲSL)ɕ→rE&+s7V'+d2j0tI}/,Fka][dPQ8d0GҖ'R %p ZոPH( AЂ"š@SIh<(q9 P9ǫc2v`e&Rf h4ʩ[mS@%GՙGiF+7iq=]Q ѿ:+WinyهiSFi}M6h:f}rPM,{-hTZS=b1IxMX.J#$86rDZהu!p-um(lZZN9\e)XHNe2tPz?>_ab5$7+Q9vrFwqQ܌io؛iX\/ ϔwv߮|z!0Sͩ4z`zC]x4Bok?'WI [Bڇ=7s񲎝^,gYpoSo-0rq] ]drj҈\84.";{Ht01LsX\lvHS1}ȼyҺ#sЛ͕wvLu83Mw,aՙIhMj㑽bϜcrL+7v,,P;j?{߅ DŽÄ1oAM̢"qTooXOh)FY13%<gXo)SGq*L 4,G8'0=1%TY >|ѳ E:+pP<.\PX-8U H:r]D17:,OVR.*POIX͙H"i)e$LR&Ae0sbF+FC8- z;ȝr?~.q?Sgt΂'!ISLb0A'wZPR,wb)U>kݥhűs:~^f Ů7`<}&}@}TP!  BVBSYuť&&34I;&&Ȁ@G. h#B M$(X8 .%eKB $It&AڝJQk ͜iT<ƈُQCWk+i!v{/\t9:PĤ+5._/,*[,ѳ0h [U1cx%ʁcIS8ay\<\<\<:P9UJ+a@<1im"cN[&*jȋB\UW}qj#|6Nlk]6oy&Õ|浊&ucgQsW]8cZTܛPS -r-lCs9dn&YnGRW=K 3"4 QѦFq`L"A48#.%!ϮcL418A THY"!X(OBr`9XvL9qN#(= \,o#pjA%\Z0G#>%>D/gJ9+FAR$V9PQ&3,I >fЌKn SN51#gy"~v)m_ ɱلp686k9lm"O.#mS-RTg:.:b(sKU+AX\de\ 'oʁ!2lନfTFsHM)Cs;C (*Қ95c9RLcu* ^V&#-A͇uP_Wkۅ_NPd28/_HBWT@1:ÒHd!K)Qی #r]C7< \lA`;8j+O8%f6脶161a*]_iD)rkl7/ZwEkAkvgfD85ZkB*%+DhSA&KQdzC!M٨$4BFdHZўG&DGQx\H2F5a1rڨbF,jD]X#A#q'юݡ 4j$J U 11*5JK4LB%P6^Jm(6)MyЂg\pjhI39h*~p.+#w|61_-g;}ukٕ5|63,Ȼ|UN¦ |oh#1ۦ蠇*nA,*zRr c%x"'J[LnAs]AգrD ؠ|LQ;GbtRftFI/<:~ 0Fz멉V% H @FzJ"gӬ}sduVg]x7X V7(xW'o`j *wus_1^}.-wx ͋2hL hKyo ~Ķ.}/|SnJkb7?aכK1K64<;?Psorq{%O?QKyOTsK2E8R2r1,e9w5rw7|ͷ- '^ȻP+AlCׁ?fh ?ﻇ^cz=TmL:LAOGM~gy3.l5β9'jLкjj؜t;݁^ عm9 ߕhMfAImkm+뜮>'xC`eʖ\-bOsSŮR-܁P+|}y9_ú<_?]&A) uyʅyq!BA?șU rᑚ@Grg nyy#nyiWh'P/%+OòlVJ >8DžSH)C1)#n%8Mb{`B >ެI^8;9"'qy&&Ly&8 /\TF )V zD3ݤ9'gjwEϱQl>m7v:NB!1QDžP$0 l$IyTRIRsn dQY'x]b,uR(ɽo!z\NGP#!>rvI_1r(8Y©׳߮(1v|^d,ԭn~[Ya{o#R.t7A5} Q7*,Z4%cnO^%<'N$8e8s&\SQB\&#ej|k7Kq. 
I:i I9~AI鏤$@hTE-@]&FiN˒w pkQa_9x UB!ԮKDZF#󸱁TGt1n^"hiÌNhIYWErU3FA$cZ$ga b Ǎ1&g~фky H{{YMGZFV6;A;y܁u.7 4NWR^.So `L%*FFĐCI-dSOB>Y Y:0R fbhRϔq)r.*ȝ$("(X.dD`,Pßom^dTQ698J7 *Ew={4OJ]r秹&OO6||q@Gτ8EjφpX&Gm#|Wx!muީɖ2EUѤJxb+'z's:%ȹb= {XD[2!?{ƑŸn/,v|aZ&$e[Y~3$E=HQP"1 {jU] F9Q2Z[ x # Is LJ6YoE׺XwVIXݪu)4O'}yj-7] >_.Tϴ'$+ݞzwM\>}ȺopMB#Ň`7yn=wWdHkN!\\onǶdZ*O@"-vӆ wBD0gd2 \J&bܱg"*WѺLD[.8c)VW#غWVW"W3yJ:yT9.u%Dv_zVi+p՛&~׻ʝr5_]A5lސPa`` %/7hܚh6 >;DktL gʷwo`xz/jvM0ї1<-hk݉/6\4_/m}y\dު+_E=16w^*^?V~72tURPwr>J0WR`0oaSw_MA_i~h727h4O?o  Vi!~Cr7$^ռt;?tEG|j5WToK@%hT̰ ub.d&:*ͺdR`>@JtWK2J1tR ])CW$3󞁎H&OȲ/+u2}!WS nøeOTeZu]嘮rLW9U*tc/p۴T0p]2Ct < UAT,PNDun\/嬇vË9y:rJH|c6Ƣ # @4QY"bz+XV*'uT,6fц,ZvַFZ蘜~+m|jJ9m#ǿ?:ȓ{|+A`R֡p8ԿL 'XY4PhS(sEV ba:.*q<]rwQ[LR`FN V[-  9U`h=VYXf12C(1jEf)9̉ީMe`9[*ۨ*;]!;ٷ; *+aV:irLCҚv_i1B )[#K9& .Vҿ/JE!!R0vU[<-x9Q.X~*&nZyN",qNhu"d\627d&DPjFmMN)a6e6ZeRvA$8W3)5ck<*8cW][օӅՅKʻͮ 2>b}t4Cσt5CDIBC:[QIÙ&f%#Ǣieb$ AHQԦD L&ێY8smFE;i<w֝&(3 *@阂'KVqEJOF YȄ 9%5 Y 5iRVE2߲>lamԏ>dOXhcWhZֈӈFŊY4!F5adEGՎ–5cG> j#ZdϋvJ?`vG\z5Pz?_נ,<i)5uI|: \esz^ϯǗsNbX Xʚs04Etc Gй:0|Z][#/EA<5K>]Yo5{G@&'\YS$aDVoZmHפאd &Q`S}V+f@*8#dCGL]oS4r׳;5\:;${ݺgYpeU8ўzYCt-K)jTY[ݞ~KFҦ#Ybl b[nHwWtw{4.2٬;+s~M`ۛ|^^lWhx\ܾ6v >r:f@;ҾˍZ'M:" Hڣ !I85e,bX.{%Ut[t: JޅD"ڃHQu[lQTif,]wfT[ h^~`j0UmwLS/ptx Nkh0i>`rȼ6GU.S1bԎs(ت*@ת }Oj&j<}9lCU9yyDKInCB_NK A\%eJ61M ܳ g;UfV19/mn7tץ{*!> n,Q葑2e'P 椕N`0)(%EoS|&9qv>r:ʹ.M3a^Ʉqҹ|_h0I5|UӔi_XsJ”|N[(-ŵjIsH/qUx;Ѳ|5[o:WvZx[Z̢!:۬,F"pn}ȲK8HSZ8'g\c,'$Z⑦30 Zd$xˮ٢zZG~YQGL煒>YKD0]O aĆГri9Vbb%qI6pLIڬLVC,(9KA[)MP )ݮ0TV(cGB0\CY[o uB!p&99Έl--㑔l 1ó.cfPhD 1ځLQ[S6+JT .,8*#2 6697H`q#KYaMqL>čN=)@ErU3F5NƎI ` a BK:Ha<LRXIU$H3mHʨhd|5W R C~!k_uVnpuqre_=f%'w釟JE G/8?}~:8,>PW3䢦L3IF_/!g}[$fXllGҋ}c:3o]fTv8m L{]~l>>@5_̯TY<`P}gv=NٜQ5?ׄ荚Ւ{oM-:76ɩ RǓ'˓E_g70~/'|C-UD%!!XWOWJG^BYu@?YN+S#bA.&!/`2 4(N%ڬ"C=Rvf˘GkΈ6gy)1'KC_0qxl]*r)e5h~˂7}R(Fa9$dc9xցΚA5Z\z+v8u>DžuO[4^Xg#Z%c1;4JIϬ XJcWaN+׌s9p$P"hc":JtRvii pQ.R-KK#}d"b$)#3NetJm]w5~D\?kt#7Yŕ9M'~|+b?a%9/Ͻ+?L7w;NJ)k^R 5e*bԴygܒ!PLp%gcvvi\3?09H~<Á[qU.pZ?cnLdCr` 
-p8+s-OzfsL?7H}GloǫmJs,nu=r1Wsgޕ6r$ٿRffhz5vNCS*$}"KbIy$+_^/#"##eE;'SRJm#LTfI-MT?Ͼ`t7o^q$+FOwg-LEC-C&jkb|sM7uՐjfuևbTSW/, 5N7k7ֺ2VtIs>ߏ.=nQm\W8 ǃbLy9abu0ttw?K?w\b./?/`SxU!=ٲcK5gd\v-ﳟ>)w`}],:~73K`u"COۏ"MU ͫmQAS6&\S .u7EާɂG' Y]Os h]hK4DdScF(zƅ2Y# z鹍 srgnmy^k&D"G9Lq- [ʤpZ:A!qᔈP< 1pL+?iLgwɵԜ3t5r2^Ui WH'x)74Os {axs y88\"[em U`$a$E)jZ"za9gJkƊR aCqS>$uiTai@5.n lc KA} ԡBH"ƑrJ9Q5;18.ޥ+'4yhs1#}cJqa,!BE#˲b9bo2蛌bWQHڸdA Mx *RvLD "9 BQ$RT̽s;ObklBqK!3Xڸan>(~` [F7(W砮3W>caνYGH(򹖜N)bx >CI\бbs>q-hHplICzP)l+g)OA7sYx- ΁뤊Tnf,mu3ۍ$t( eCQM'JiU:9ngc[}*jċ}sŽϫP7j|݀ʣ[cBΐ`Dk\kZRp;5W ZYc&Ȣ3Mb0@$*Y- 0Q^s}+kIk>V 6llŶsCPzVf+yӛS[=:q &‹pF+m#AEPԢ;bRfmONcF10@QAsglKIwXT!t<7Y/`Gis\|uֺ%#OݬSJ#zEYHTPz4GAiqۀ^"g M\!r -%& }W`rp/qҹn CALKJOsDP 8N&;#Q!X6Ġ]aԠ5n^R?rXynnB>|ϧ7 G ba)!0-ϝ s&)4Iyܥ⭤vjMy^sZΚ ?&Ůno{DZ7rE9 ^0E1"Dbmrϙ:gn|$ݘ\7Ӎ''w` `r$0EVSEqN`,yMW{8TG-7`>~&3x;{gv }F[u&1D3Yn׎@"!V<2Z#B{{F8 #\Mzb81DS-.AX8%,oZ3'y+8,  Q$/tpDV7ɛq-)=lfM |<K5Cm~ZPT*5ۼkxv)2]s!2BIj"wBHn5Ҝ&=4CWVyT0 )5SZFxTWt[LҚE_.uѢfn+4!8$a)P; EMkv4@( KMxZU/}NLĉF{ a iq4rCF6K3z{ߝ8.OʢO,KYVKr?.}4+#~#N&r'=Uz Kx6go|UOUV&вES>{WLƩb`XDϺܘ.F'Kds ] 7*J׽_Йtd45ġ0w`~ap>6$߯x8uK rQų2XH]˜]|o7ܗ_|^1gʙ)<({?Y>l0wA*ɮ?s"l+,T@F_=ן7&!LKtt{p ͻ䴸uVe~?|4mHFr@F50EG1fM40sCХCO:ǿnZ\htw@?12}\$_8I]|.䫿7Cw&GƔHL@>2J\u}Qj~P<2k|<(0 +5Tzzz/\x-i ̦H|y^UX>bӬeEO \}-3g=(f*9bc%Wh SP,?Ot{$ȧF 哖yбXGg@<-b2ɰzX:j#(L#5[%ޤD IK3ū4%|cscϴW̚H+HsU55,iۡ=ЪmodUrg9y1b~kq{"Fuy/i5(i-%- a =y yY!vCiBdR!D""!zb±!}VS9/uvKjhv6`6C5a'~}M6MgE=&G 1ɱΰHX%P4U_nLhX4uZN %e=M!MB%XUDW ЪctPb%B.IWXHW ]+J$BtPb#a!Jywlv  8ZNWjOWCW"TS3t( -k*=]#] 8&XU;CWVvJ(9 JRx8~q)Jp Jhe*Ժ3+ŐP]:vUB+T*TtutŌw`GWw '/ؖC^!&8?j7ԶS' ->tJ2@W^ [X1y qaF4Zzn[wpގ>/n9ƦxWW0AVCY"z41ג)E 0&0R3Wuƴh5m7-$D34-NaW]`}e;ЄJA{:C\();DWK#]\XW rNWRX{:b%e03t>z]ZJH*3+.`5JpOtZNW &=]!] ~MUEW*=*\z:BN uS -ôtPҞΑ$S ^uo;]%3' n2(W%֞·qC9wvخD?L(qoj9tŷztiKЂ޵d׿BӮǭE, H6_1D<s(X-rH]D>Uuv:!B!-!K'a|f>]8AӇ}Ci^Xli4ԥ"]Sh%s,t: (PLW.=p4t::Z+K򥙏~AGCWBWsݓ02]B:uDtw_q\㎅:Z_:]૤+t9CBWs]qJ+$7ƛM./ #J66o'?>b0~I-bh{]u>m8>~ ? 
澛3_}@m7V=6/[޻&o6 m hBo#1 Rz!o)jX"X:dofhM"Amm/bgU޽co`] UJd~v xSdjOܓh&3Bz6f;¸?q+^*qaTT O/e _f5nycVbvs^ƫ~eM@WuZ|1L:ؓjTQke|z]E%iS E2i+5(mQSꍳ{[^j7҈_]g+wZN10Ptv^kjQf!l%D hUזMFFa4I YJUI%RL$}բ$PƬE@*)VMc"9QlO NɖFi Hqd?-4m.\m Z 9#̷5Q&tI%c̑0!j-/dDj"B>yj-X%JbIT0JpMVdl)I%ebjAӡggTCԚ1$kFH)Ldit鮎bDhi{,b ؔ LsSVEG2eQ}h9(+-h ᳾!:$%LE:݂6Ls{ MJ#TAtLa0!?YSXhdѱ]ˡHbdW{! Q"vYD'Y\~yCFc-[@!!X2hOBU |\&9A҉YUb9c$xNTejZ\ 2E%d|fo ~L>Aٍ %b'X5E_G8: Bяk:bAZqm@ki fL¦ VCIq VhrЗ Teʦ J"zԺuړ(.z1D v%d)1j0frYD .QgV JBuX%ZJEu)I6$2P4.CZ=c%P[0݊ NVh sV@k!O=4 o5h;FQڍJ"Z TbdKJ̃d0)OZF%"[M`sDHQ٠+.L-!kk]V0 ^gFW4r XaҸ1kQ Ki)~O~PV}ƕ]b-PP]-AEX|/H1x7&؆yo7WHƊHT`6h* \dk=$ y/AQ* e JSᷜ2EHcb2Vj^P 6=`, ל !q2F [Ah `5eP@dBE@%2Y,5%@x`;ja;‡ q b ʮLJ. :)7T 0XPVʤCB8ެAP{Sm*3!(Q@HseԸAj FY(h :+8)m6ݕM( ?H]Fcaܣ.oK$k.6g- yD߫S !1(Z%J̃ pBi5&Y* N |u5ft菅oVM8,dgzLx ܠۡbB\"B?b(8ih1FPTP]!D#Nȿ`">`FY'oLri?92^p %#7}9/m`-d^#x:p0G&@[0^J.G[K*d`1uLA-EMp`CEE ڃZRH&5*2NOjEJxOE# 0}IEHV/V>(2XhGW}`չQ,TGW?^ՀXEQI9¶,9 A";> Vm~5ߗwΣN*q>e[=Jݤ7%YtˢkѣP.)1H /Fs7D. ˅"8Bik ) ` 2{"J۟V8f4`Rꆠ%2yWXNH?(YE~]a6 3*I@̔HRd&Cd@@GƢ#"kTAWE2$cB8*gD(!h6DES6Pk.OtGHGYLg& ?sԀ,.e3hm@T 5i"*x/tNAPFmw+$a:H` }AieTLCbzihmGy Ϋٴ~ tsv5[=ޙDӟX4 B8Lqt3(ih(t!lL"w@((r2(XuTLk̪k>HrDs%JV ]"z31iFr2jʌТ E~  <%*` ArhSbHP.h72{_4Nr9Vt -LhH H**m@z |""(f!==` q^oMgc X@|oPDRe 4I蓫v#7(.# U@ S?CTo2zR%>7#"1ps\:&X&W (]IBX =4v*hc +~ҠA*Ei J?s&H b2D 9k듵@:E'gj:?1(JS>~5oj+Ƭb;ڠb  FMRU0htAkaF3^F\U?/$6O4%0`.䨽¬S{p:q~J*S\@ b~;pi0 ֙Qc6WFå>tH0IJ):&@j?7c#t$ 0ucL\HΛڻеϮhD;*:0B [!= R~!mqs-nZ,K7׆EȺڌS1N(} !;撳+4g>p"GLlJɇ)ozQ z ۗחze͗s헯|~ |zzyY7_x rÛkKG}[0;s ~㛽-cQd}Vmc,2v w2] |/d9_LgbW_~s=}8b7rɓ}Ѱ\IU|u>]ݾ[+d79Ç0Sz9gfRѻ4퟉ 3ۙSED >&;{lx Wju PJJÕs{6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ WlbpņpR WqǀKx W@K/ W@ip WVlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ W DpD+t,ʗn(]`k4\mlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ WJCd`{D+uG󄫎6oRɒ WpE Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\ Wlbpņ+6\jT~pޯ޸mη0\kYm 6D#T&u<0"ha/x0$ دhytQ޲3]EGDWNhõX誣 U?F#^#]GDWpGCW !UGK#޾Vtu`OH=/]V=ӽC_te[gnR&7i](Ou؈ٓɟo 
َٜpڝPS4œ}"!:_v\J9j"owAͧ>nĭy]g@&'x^&.9//;f' dqŦzssr}ꬮ>|Y|Z~8OS|/%|`z0& o};ZVW繓$y;Y}]\uNO?{ƑBvado KKv _%r(J=cK3=5տϬS"L er(׶Kb}mM&0Bbb2{)_0wl>Q8\ͤB8 b.010!^ {&q1j~ #9i^]ŲЎe`mƾ2'E""[xmz u2{̲%n+{`W%Z"ÜL!,alAM @{9T>}?!eB1Fx8P ׉&Ҝ96@%>1C. كjJ2Zȓ1 JA "ug "LY+OI\1MU'#\OE\!R` շ#8.9&$ГWH"0C 9&N\}Jhi|4]Y ˧Apxӛ?WbqrZ:]8況]jn#3ʓ Wۙ`Svk)FZ*ʕ0`cL;WŽjt;u:EEH޸pZTcڏ&^y&wD+w8˒")0SẋM,;ٷ2 ~Zo9𸩞OxNkn{ %RyЧY;j{!oM~5U bڢ@z{8/I:f1L.,SËsMHVEsD*y,'MQ, ϭ'RSEdTUy_.Aڜ}ar/p^ kMȻ@?~Z,^YM<j*)v<52P?>%~^ݤZ7if)z|~;D~v4[X_ j݄XUզ682.$$t h8fi1I|l|.f7M+8 O}Q*J"଱>3¡YZdo# EʡFqWpk<⬯`I]_y5O~z3/n+++Jg]UǫݷWn&SW;.=ߒ'Bh$u ~2$BM?,5l!D[SC]V*5Jֺm[RcmqCrջ;aeSo;_)(wm?Qim6M@44>0hUr]>ksl fj*-D 2w== >\Aop;BD;hzaK)>ӿ/kPf[GKzg 't!YMea, g4鴘C:R?$%!2sTF##$Gfc*zl Y]ź5ȥs.E,(lkVBHbgZis5rLj % x"}Cy@F^u \|!)uM@`s ll#&Ía5谮 4יq|2%u -7&:X+clE"Zq';'8W79ie4Ԇ^uW>cЀ@0dipVx鈴ĂaF 4f?lԱ5w / )Uv`sI֖Й;jH(T!4o\wVYGIi82W/UL[Rv@ͯ|(E>IN mVP$bz*eXbdePJKo7 #lO;g8\N4N|u*E4I$-"HKu^hpG@I>}4j:RR仪ۭ>vA"S(7ɟn$ ]jwsQ%e5uu%i{qR4"ڗڡrڑF u&}"I]pqzH4]65m{7ݮt R>MSJW'E/jBꏾrl؟IƩןΈp MY޷!zɮZQxR4ʹ ln9DEU`i >'nSL~D:0 ڶ޺Z[hۡ`;K 1(T7<0Ia/|hխgYvG^mv~#q&f\kiCGS5|œy]łDaZ=`6yw'Ttki\TOdBJS-9$#V3$Uzъz|d6iȩco@>wF>R`ZT)Pc"j;.Z0$JRG;׊C (O{JĐO5KfbAJlmskĞ1-£U'[@Q0ը.W(R؃@'OUNl./[0,R09; P P3.@#f" +%,بBJx5Ћюdv'炐vlvFjL0OEl&Yg\Fg}zePs5rqf/1r7<8BY6b,ެG }&ϗKhA2ƙ^հm_§Z4 .tT)AE/^XPYJu>'p"E;Y^J',,Ed7x{GkpV >>͍0}ѧf< Ȩ(G.8o~a?jŁB EI`ғrY"g۽g1{/;o.iӉݖh0tzsݨ 4am (ނgAUix7/ Gzg/O%JB _xpxz_ 6X 9" a.AL6[`i`Gc3Op{aOsٲ]j|t;1A11-H0#Т2FD gc"dt4Æ$ cƎhRНZ=j% mcPwz20"7N'e*:^("`UPHy9-Xv2 ) ɸ+!10d珵޹j'.e wIGXNZ'E׀-Ӊ2KkJښ2(ZΡ}c }4D I ?o-tm"AlTS :Ʒlh-阜IiCĆ>T{ .75&ܞI&WVnѭqs #lurmTH0 B2Wm[ʜ 9ZF -L2:%탒ȡ\wN^e][dv>|]RBKkRrNj QpFQk:g*EZWv[6XD$sU}q =}:}Jញy6c*OA.e<44s~00hHhiFor#mb{3D$;>VB2~t3ی,j<a[Dx[{!8am\?{FF~]7 3E-vv,Yv+i#-u,ɏ_Uv'Y .hi7dYJ)O0=emf*7HE=CJ0Σ4&!2TB Q$cl7bznl $Im*O5sN&< 5qn~]G.Yn֯M&6da?J ^7/=y"õ04ClW_;nwZuߺ$(PSZ,Md*-Lg?BLvYɺsa'Y`[YDQJ*,(nv 9ΈKI?n1QĀjb NP*D,N@mrpkS121lM-6BvϬ}25N?#5~ *(}9BX)) J!z^려ɡ6k8&gya7-֬.M\bD{c9(Rhһ9q OJcYfH $^iEE}%@Iu31<\7a;RQSɬ[/+Rp<࢓Lt T843zxyfbC<5/ 
ti%>vv|v?̸4M4ܖJҭ_@cj:E22A~P%}ʿaTһ?gpfgu:Ei!Uz?PAtLb RTBW7 eyrnT٫ 4-2OC*w9R>_Ucabv[|' >wr~+FW];$DZ@֖E/(4'B65Υn)kl%]?]wo.s4=MÁܛUxgwέ{?+yy7(ya2o(o~use[y]Md=BtVoo6-yL#ٙ(e0,mdk5ί<o[J4%R6BeiC"r F(x1}÷_\1DTKgbFH _T(\JfdP)DHғ(=L'OZicqrLleo<+%K\t"4 gDM@=(2%G_"J)x ڲ'vH\\1Vi-yBUdŠFHo@T,F!*ZC5uB:$X'᡹b`Xoyr#5ߘ[;&aM3Ycuza}QC? ;K/=̶#%??|Gߚ5%༘`OWpws5Go?.(|ƎSm- I G喪h.P[?8'_Gf4P( rF+)v:n`:Cmjo#k7r]ښs-kf}ϑ\xsїXSab\[B)e U8&~ ˅\p w߾.~q{yק~W`̢A׉ Eu{>X*[9k9?)~bL묘sMK}3nni~0;+.GՂ k 'x^Ыli-g@n1-$U+A=.!&jQpQ ygLPG̣ftT#ZuW!i_M?5uO_PEvƊgqTE{+6pPW\~h Ȳs7ϒ}L 賢|i1^م$,}tAj+hŤZJ{JT V J9͝yZctU UF?,l4`_xV>Vo`[?{ח bǓqxKN9z9v}o'+ OJCj@Ἠ$Sd\\|:oAsA4TǤ/ J|t'KzaP$t EőőnD7Dqi<H*s5QS$mY MSU? L"&Nhs在L%ʌb6,)ykjlM,wkՍٗ>o|<1J8aZ&ܼgyUg,[zSӘڌ&=21eq`j uS[RŘY6.Iq64Cl"+\A"DlAG'S7dzH CUM6wX&=pIlvոߍ5ErR\GBQA93TQz2b@^S^ -?~if/K3 u7>d8u/ܯl=?΄[\=.u xi'm~5|iŬ$CO#X]I0o$%poWƴExiW7YRٞ"_;h#5>-7+_Ս` &.Yע fɍ-30#Γ+w?}P-0M'ы}UIu MgI`AG// C_Sɖ؍ݱ^>2O٩>{ wu:.~z}%7[8(nxjz77Ko_ԤbH?٢W)sgړ/_V)M 6iÿu,}_Bҳ͝Tg{`}Wg7[''yKxHmSg =z3,Hzwxuh66\z=ik $چ{ɍ頶f)p1h6x+&d5QU}57< K[wzIΚds=}C 󍍕\)ж 9HysQ 9P;0Uv%oj*qy}/UmFdz0*f.M)2.(߆NQ{s_whJ/CXv{Ihkg{Oږɝk8f&ws6ׯּjD[3aZJ FƴG d+D(vP% S OAsߓ]}ZahZ;?YRB 0Ժ`D%<`#8:ivF BFѻѲՊt:ew.98_ ^j|>nj㐏L 2T<VzwB c9V>o>0k>Gpk.{XGJNԧ}:m3sUoyM8|>E@&/Rz"r'{1h)gbv?}\WrZ2=w+XJ㬞׮3/Iiá[jH. 
vRpnm^]6f=煉rOܕƆN_C_~i{ummv7^TwhQb:=~q1 iTP6'/ r|1*4\-/ۿzo]/c|fXgH7S-Kf0֬9J/Y1RL{g{cmOАCА={+Y۱<*,Y ,3*tF j`f)A6FZ˼k x$ )[X1c2b=6MVHKDf;68wk-de]Ugu$#^0 +8C"b0.X/̾TLj#BX<,@ IM QCvY$PNbXXH58jKm1;i\Jr6=6 eNBfƃΦuPV;Z|}g8KnT_N:^IvSeEᨘX%Z I46 c3ACo -} $jOIVdJF(& |%2>jK ^22x\a̙R181WɆ4c[,XBwČW"7'뀍V ՝4(p88Tgc\V uV9N%Fhb"qi*F !{D%$ؤcV0/2m.$חʽxK͈m#`b jgӎCAmѣv`&TNƸpd9l6CL{ãFQ =(BAUޢb i`-20Cv4H$k@>r ɌٌQe%`<D̦"̌Gz\$hL$pM&ϾC\Hڌ0Xo!̑YI%pZkxE渧0GvaTc]渭,{75DR'e HH"ioNQϙD}aAJ}D#$4D.!ưҮ I<5Z-1`ϓ(@v\켹횒K=}@ LôOdN5djzRM]7A%Z9= ChInדɻ֚\IG%J3Y]@K]z>]{oɑ*nGwW 0=&^\vX"@CĘ"cW=ç$R| EyuuwU܉@0K+fFғ/fE<5"  >NxTIvB=?*ǘzgpmGُeUvNtu &dt&TTDi< %0Ti"rn&N=,!-4;kpf }>[LgmsKYOT֚naEw+om6k.PzSVO %]N|Pɱ}M|y0W|or"c_ ZǟpO)dDsnXsɣ?XqOVemIvm>p&ָ2>\uP~9:U=E,[Jge,\C':r<[؞NoEK.7SJXؠ|Levp@[^η<:4{$LXp4\O!c\k\@zepªDyjz)rުcJ)gl+։CIgOG&`\ SLtĴx@xUIcv'ssS _C巌U"DK#Xզ4sZغ*վCeQ D]Jˁ?o>}e9q9\&?/2夷\6*qQ#+#@\k1_35ܓcKpꘗ8nj=vG͆P7CKTYSjR<$2b8p#Dp\ Rg?INo3aH}mM0(iVy8(<_rjh06-Ẵ@M鹃\h&4]LnO-˭Crcw+׹t:j9n $+L$IyTRRHRs2l6?'+(Ds+E@I @$8P6Fv7FwÜ/s m_ru.uR?$ w,cr{pǛXzoM?V7H&T8ʢAS2 8i%)A9T Qs LjMm1q. g\F"F#8th Ml5-㑔䈦IeIQmKxd9,K7"\t¥%Fۧ#8/ NL8$Q>+58 P`u!%6:Yx 6@uF>)@ErU1F9iA|%9,- =.DA4h0q+&\^si+cA9T!k/:teh+W8Zw܁F" MJ N\zїR^R@BXR"KN )@FĐmNYsg]>[yVu`!HХ)p:R\"Ub;MIP\"(Xn+6'0DB-ME-;ci]QE/< 1G!JiYA&A#j¨IK!)\Fs Hzr#%Źk'4+f5yuxk}/qT3tvB q?[#I9Synk 8b1T N%~fp*?pwnuV2Bn}>C\ qF ! ;%gWZqN?aq/g*5_P$Z\qDgc.,'Cr D [_boɧT_6PXOwCRJTOTzۼ3b`|2p>ZOa?9. iwN\R}cB/4~Ϯ5rQRy[nt=f=B|67szls{?ލ#:oۍׂe-IXڒtKgw(!~<5@gmvn:UcduN.kuY_avllj%ϟ?}7E?Ym3Zk?{{QJي*uzwn7߽>|~߿ .ϛwo8qQ8[F;<X*f[9?Ur]gXLjPT삟: DqE#T6T' ; ˃˛ן`M5M h71ӮiKvi&T*1xICzǧ}&%yoc'l#Z]oIL 9Lv2&(XxNIV1Hl(ߏ h 7' DqD٢ȨShL*o uOM. DHSvO=aMiǦxbRfM2<4yVa; 1P&q׀ csy]? 
8ȕw˦zzO%"X,+Y𯿘ŐOfY`ڔ@Y=QOR[F\J[Hu H!\co~9:k^,@ ||MpY#!4T!bҗhe2()%,iԞjqʣ+Ո+ E)hey44*` .360j($mY̦\KFh{Ow`rw(o?t"x#~-.Vs*u>lݼUByb cQ`Z&I>4g3%-CAQ)b%+a!nxbƶœcSR9{-?7ʼ7@bxҢ3'/,&ZvKk6iš+ƴ+؈`` 6#Ym7kTt3RCL|V7 sJP$%Dj]]e*玭+RWH⧣2ZԚW@*5m̾ue֜zsF|6#Z]mD.'VWQ{mͨGʴj`8?㶲{sC2Kۮo*ii$X~2jE3TtcWәJIZ5t>D L2'jp* TT?{Fe3@c'Нg,UU%lKee:8U,o\%BW<]OJK+!aBm1hv*Xf$HWR!V; sZCW .m`BUB);WK+m]`"W{+ZjOhɻڕˎ^ ]i -EqWZBOJѭ >ba]0o(JBMᱻ;>hvmͥ-NL8{NvSH`N]dS#V>:lͥ-IyNݤWScz= DLDtaaka7FHR'ʴk1]wO5*c$MHTx<+0AA-иYY ^@5Ӣb/_37$7[ Xh6>eXfӧلRu4i"hM"UQ[*'OW f]@b0c7U+i[%nŧNW %V]@HViWX\"BW (%]@Ʀӧ+LQ{|W n{|W UBybч]$>ȕ=tVaqtPR +EaJ=}P\BW 3h;]WnhՑPtb x]=Rw>cS$Cl9zS_/^e%[ tPj>feŶ[7z^majJ\413_??|<3PI_O_#Qʬ1!cH03D S"HCM龿J?˜W97p )r0~+RQ,~K׋Q247%)2rUd~>XxmmHR<͇WW7igYMA_-~>|-vۏe0.|KWۿ/srA68?T4:MCE-P^ <fKPD!L8_W46y|:IvPg7RWxd!R,3 2""&Z`` Xy$RDop\~)ϒ G&a,.bBQ=ˑ jD-d(7BO̘G?Ͷ)-{ j<_v32B tpU[r $ ymj]:Ry2sh<8PFc`[43A@|]w:ͧ*:[$R6=:q V`E`N8 ",Z#V ,Et`1VYE L̨3hQ) j`PEL#18z3xzUΞq>{nkr:Da,LUod7Uޓ>HJXu]v/HkLoS@nҮzoG`ȏ̰A{c1 WY pl<+*yC M0g_ϖ@bR+%2fy?|%${اﱼ p(<޲$} Yk;|jۀ]"gML!2 ,%&Ky~]Ʈ|}ds=fOB 0ȴ$9)Lc0NL,E͎ =!"}:))M,sPFҰU/-Hit^ YjTNKr^Cwz]N>ㇳ6ȑ@,##E  ;+"LDB$ J{{cwΫdIa<_/$=:2hhb f~7Goon4! 
pHphKeު.8`t}{C?ϋxk1 8]m2Ϙg| dSp$*t ;_wVX qyA.^$Wom(vEp ̔32!_lkW)ẋ3f!Erf4y;_.߾l{2B(z*?ߨ[&޵@JM 6,Oy fE0b>O&io yB0Q3[ԛUVcaO[9Ry>\*δB<9\h%0eRWegTz)`B0ŊNp{_ʎ~?5iixV[w'ׇbMB@&I{Kk+tNol<8 {Nv$oTWT>{Ɣar{Py;, <F=yL.`9 Ppϐh,R/diocbY$|R[Xd4%i&ӝ9@"p}y]C 󍍕\)mAxʑ?00Q EAIOXk9YO,S<φzijcX&"8cQꨍX`0PnuzbD HK⧬Xx"5mL6nxf8A=mNV0#(6cFL4=aLv^<\{#N xH+|ZAZ~;#=(cP0PpRW.Ojd~sVk/;^sP7Vٍ b7ת\-ݹPuP*–PPFKdLPz_C#,ޞ7u}73<_6_ 8bkno|RYOl^s ӆV=l&l4hZZ7Mky/Wt"84Mh[XK13@ _+)&"t[LpAQC0H."@ *r);J:75pi!=InnX9NT7o9~fq̽W6@Me< $7.Wwr촣hGs^ti|< 壬 S܂B'Y|j@K12dpM̘C:,Ncʠ:-Y VQT (\QɆK!0f4Q{q+j$D5㉺+q Y<'HɎ/ϳfzx*ίfN;G bd-%L36~z0EF3fX?p2uOء_/v0F 4\`@}QJg8D)U `h‚"M GBgO;n)y_UP"ڃ}H`IBsܢv \RٖvZO݄Rx.aHoAypal89weQ֨ੈVH$Fe1XpD1%!ƛk;?d&#QHYu2Lkm#G_E?m|?{sv, E[YJraղ,˭XtVw&*bJ!m2'r ^QٲF}@8hl}[d%$$Ԟhb"2SY l`L+ ^%UVM$ {@5/% 8`歬Z::Ekí"aA:qQH\0`rqu(G;]00F#L9'ݯJRVoxr^q8^W㕳{KubE$X``"Gd\+026O(DIN5q6cd%68TڦH#,& \qUȤ֌ft ;]uX^=2A 5guFk},B*Flkl CЂ C :;@K˙!JF0ۣɬh`2F!EQ}*fY(ʛ X\_ӮyXc8;Cڦ=efQ%kMi)d5l6HKi :{M JnSeml I#[,dB\ -隄 @BM#(b:1 yXF}(CшǮvm{ыY !F(_gW\^ԬczM/pYYRA%| 4A@3t2dD%f 9D(>θ@1T*73b6au|Ap}3kb9.AH%GJYT!hYd!$M渧0GuaJTct(a}c96L)`mQ X̲4YZ0s||. 4OƲ,1kAe,9+ Zdϋlv3<4憔lg ~md .kSJՋ!-:>lp9Ϧ.PVNp L=.z|yq: OB%DbV+[ GC Ю^.O>ܠIThW3q2wŰHzEQOfR>u{Zh&{k PG| gaޜ]ėQmHd!ɴ%cRQ_(Ppv+{& 1 Ye)8F&H/M;w낖eE'YD{kYrMRڲ,k"76 uE(fE;C.mQt眭7vwrh˩9y[J2\PY6ɆT5AuQe T`!@8;l?p1,. ؑT,J9UHW+ ">OOo>CgB͵-=(I}c9$$RL&aP[Eֽ^'C;Rͧ 'kFGmΊBBkxZtJn;\Jʼn238f|zپm,Ma k̠,Y@ lJwG1 !@`Xkc)i 6tp&FZip<mt"j-s6d!jS\I=? s1/Ma ry(N|^ziNMҾniD r j$CҀxɱne&#&}9d.ڭ)?!:j`⚉k"_p>gogM oQJo͎׺H.f^\8ǥ_vpƉf=~: R^Iy>m?>QjwMޝ ǩN2+>Y)Ӭ#C8dK&.ѩ5Rr Ga68:IW{ogE\GN0,f%l&7]Ѹ OϹyNB44Bk{\ZOhB/_^q[\8q>wxw<͎`˶揜TM_N ӓkUM(npONjk>.;o\^{R^y4[v92O?\ ~bl]OڭI{:_׍XߍkYe$ ǓZ?U Ru|˹/'^ۗ\wU b+#cMJӗ3L򠨍c~&isQC<񷦱ḹ=H|7ߕo99滷'\ohu$hd~ޞfʸZ) ~8};5;Um^\99Nzگ U߰hbݰXa3 y3DW]Smҩ-n] wk~ epjĻ1[xS>&%=]9~6R,CmXF1)LJ@E#XB&栭 F@o#}ngC\Zz~;DáoLZf$ix IނdTT򥰡cJ E=Ntf88E]Ď:V(ֱ);;_G K/wnª>lyv'lgQK(R;"F& F| <ēNah}!Q3裐N g%h 2Wd`6wIهȜ1rϝ҈@&uxrDA+^ *e6 q!8TcSN仗1c˜g.9lu]3r+ }H w`mDxBAL206! aLYQjHţ7km ? 
/m*[}®nh+'.RW:hS]Mߙ_Ndz:x2xes0)YjZžץčMǖԋð8.?S5iא4 cZpݼYr2JXͧEd5,PTJTyUΉѓ4l]ą+Yߊ%o sȗy TL3eDdK2L:rKW.>:q:#f fiP.}[`G kh0mZwo#릾ߏ'kB}nH,t3>M|mU.jņl&jw2z#ƥ4N19Y>fө.FaU~ qTOwb:{PJFrt r`=Swa(4b#ء H/9JeK*yO+|H{T"CbB.e& t[g&/ϵ%[rXjVYn.V~K sz]oGWc3R쮍ur?\ i1Hd;A!#f0I "=_wU`w鯇Ey(65W4b"!q"1:1IDMʒTr31M;aP;db$)#XY އLBLɈ-RdiY %`p^ĥ׵8ӾRһ.f\2|ܺ+F}tx#M S.f^/ݳ*~R|gY+nbŮ8Qr m>Qn|<i(THK2FQ;[GqНR:.~ޝ-b!agFeZqCw @E'&:!DFqJs8qgsW9OQ\ \l4 QXiݝ3dH6$F,4ψ!6Ҕt Ė<lI'/twnW7x]uK)QіQl7[F̗-VoEt[Fp˨R]a Ƽ+ˉ/t]#])agVrp/thM QڎΑ +9?hmA@)iGWHWF !}]`XSBt( JŴO+,<+˔/thm;]!J;z6tj]iT'F!'fpO`3Dt +րXGW;PAVUeEu 9)* i K'<+xm0ϋ) ) )a}fD]X \է10oaqwS`xܧc⬸qxUt({^?{,ZcIErES˒y6FW`! Q )$ m f 8}\!BL.چ/{6R"s*'چ6u*%l/d\7g\+w\40[Z 'BCQ}kيڭ_C,"6'!<EdUh4H,̈́{Hd+٭JQn/!S+:!B;n)egȈy(bX.Yn}0R PMjrKjJԪK-aj)nUB_Hǁ f&+DxGWgHW\)äGt?_ \}+@km;]!J}qst%ܧ`{DWZ_ t( JjQBƛ"?]!λ:KRZ*]`M7tp$m+DiGWgHWP)|+m.BvDioϑ A5̟` ]!ZeNWҲΐ[ []!\M ѶrWχxMs<ҥn.'dBPhI[t5eҜޫlXSoh؛t3ViހyGӻگKWX ]!\&}+D+i QxGW_5]!`˽+)kOWRҎΐaP ϝ ]!\}+DkT P ;:CDr=+wpũ5CM QVwtu>t1{DWR+{JОjY3vtut)^kEWWxjGRHW6X+O њR9ҕ1Rz :8Jd|s3҂W7v>^8,xx x7KB% "Cs824HfH#s q46(Y>yw[2Lvl>סʦ_a/\߫ɕ2qZkZaIh4׉x}_J|&SQ6)d#Rnera~._rw7G?<:XµVSY#QqpAvBBYk ֺ!ߤE %7b"xS,{ {pUUqk=..U^A?X^V6-grC6^Btp4B#ԠۺH~,f`Z|eSxv^t:ۇެhFzSh2$& ӧ f,.딎D:}ˀ]U61$UCj@J, QmVbKxCLxCBk]Є\9]e;C488{O.p}¶)&ZP\P*g۞cp9n<[B+_<`V)yF 懌\rhKFTK#t5wO86N)pxsjGs5ݐ뺚OfWs땑 Qu>3[6Nx ρ+9hu!J۝{>{DMы (]5e=NW*zzЊsiRD],zxe9D )T sWrji(j{&6r 7P=E4.cg.3lbך | ƣ7Xe[q^.Q3@v1웨%7C0C`f"i(US*WuzR7k}'nB'Ӛ;-iu(TS)9 ɓPpmC:$&"gIb(Kb;q&jK!E)bGLO~o#(_R ٶ]j S H +a++Dk[OWR3]!`-+k/th-m+D)lGWgHW+UJۗ0W_ v]!]I~/~r%b7tp+Dɻ9ҕ UBFBWzGK QRҕR KŽ+kt h1m+DLGWgHWDW(rWW{]ZKZFQҎΑx,VBڛT-z@v]ɚbA~o>'N7COʶ )Еjg+߲* (.r&M'F' U4Դeŵh|hcwU` _F?I <6^KoPVz/(Dvj|7 +_]*T0?k*^%Tw(jnw٫{nx+"y(ID&",X s245++-J0kp;i^o`~?QaUUo98x8to}̯Whv"[||xA y2u_<:KmZsbABɌi3bBeILk|q28uΊt g&moh3>9?oڥwHh[MVXLg0ߡm)gOe2VUbʞ //-7IfUJyXƀDnyjYE,y.I$bnc"5U4(3 c+~gͲ0.$$ gv1LJ %2YΩ!DrTNU X& ϨIJ`*=1 K1?dؕڛ`G_ONbA)/͊ .{,umgގF`W/,>=NO,@R]5ժkE·tp>l7Uזнn4n=]4yY]>,FI-u=  ׄNgշĹQ\Vp*b'TlUT*J"kl3¡YZ۔&,6j']2MYY0E/$2ƹj͘L`Q z 
͚zUXOAwejU\PZ>hU,\wv>D-?q_qݓ1Huq<.uE"4KeL "~cy⢲A q`}eE?x򁎮 2}AڷZK1g3dvMj|0O՗gHUqdVhѦt~&a~t2DL@uѾGdƣ16ܛ䱫C}I#Z۞A]`4]x1S >ĶXF5ypņrmմ|ۺ]qKI R+,`ͤ7Vrov ZInb;G+VnQq+BOac\TqcnBG ߲ɳ1|\R%afJ+KE^ε&ib$>-c*h|TFK@oIx=Uf9k]RB4e\2ʕCGP +_ o;|]wQ?{eVquA3i2F4֝r׻T\}lG5\=[^#QC˄[-XI}*%8|]Jjf MUj5]*I%{X0,݂⯍K. `nVʶnR.t; 5p˦ą-e6 3 G'lunUٸU0"f1j`i1TS ]Z4|w>Tk8Fe'4N }TiPSpHIh]O hC8%.A2SF|:. T'U О^Wf\-:ðߘ:1Wu_MWY5zlβO Hy߾Ta)d@{'U @0$|ܸ-qn#&7JȾLoJnnGre9EJ@Dwi+K-ޖ qktwV;J/W7~XBc*6ޜYLmh?.'Uj5ԫ˺RY72#.2Y]]sVȓ <($T'7kp&A)d09(}%1Mzhgecf/Ƨfԫ?啲uWaΦO%7OyJ:Oc Pr(aJh؁+f;7p $$ )K _pv9n :U`Np ؜[wN\ |"m7Ȝh`DSfsɢ$ *KOM6;!M5ٴ7MҺEM&hAHS*=;\t/^Ffd|x?=Mg9ʛ Fl@1.2Kɼj2̛Dp?fL4 ˴9jH@f-d ‹e(aby{ZՊSXbFʪ*S~ J!PNkHN= FؓNvb73} ZLuLv6;^f9wa^`$*Idqn0y@'$0J3`JQ2c Wt9X]fuїwPK*` 1K$M{Hi/SgN5K4l鶩HAe,UB,W 1=U @dÔƅtSҹM<'P\q;,/gɲ怶ž"8a8eG~r9L-Z`hR'fe:.ʰ3Nt"F&'09!8Ӵ6%47[`a '9%:R H% ΋u}S9>J425%I.JY7yYp^K m$kLSWL )F ^+J^ҽvaD5@yslV Y\8Y^- W^itBg6kKŠa6/bf%;Q4j!Q$xFRj"USPs[m~‹D(R?()%݋?0=Kw##pf)øNpi3c/GB_12>zMtǺvVϫ *nɨL_6/)ʑDEjgՑNOwW)wjb[֋J VaNF=P` #4(W Gm:fV?Y8JkJun/b[]ivŨw A:@O5/wIzPj$wI|Ij[h$'tge(@v蒺as=w,#N=Fri"9 ZVM@}RY0^u#N@AG.`$U|n(b)UoQcW'|q^^S9}2m@b5o {/7P_חqM:nrv3Z}fD*W/{}f'^Zs!4u^69uCy<'M]NP8p4MrT4a6@6Q!NӨ'~ԧ{fbU#ێ%rK.l];wZh;XVԝ+v#)%(Ieig$:Ed%D5sM,R^IQ')MբL8D:v޶κ7AJC3_^k!"9AزrM70Ұ ѤeOY,΍Ynf^4c `hi}`{9 gŖU(5 g 9`˺Hlj4t7\݂[{`ūIecY$hIHisk^0jḮuT[{A~IQU*"4T,aTP%M @H49qFnƖ»2!{H)Vˢ,l|pK7\Dsp[/$INl!ɺKJ F:( ADt#ڲSscOY-X[j*lZGbs͹,.P`O3.` DQ-.Nn9N E81Kib60!| 8(gJ&m,X&!m.0<}K)=l`7׏ɯVj`О71pWJ~lnm*Yf1͊wcjiOgi5~FāǿQXO.tphU3ݢlβ/x}Zs1O$]l_2D1"3w}RQ$. 5C\3)&XJI6eYgOwEby/r:V?T0v U5Y sNz [ZVz: Qm04UBDjv5ary^zd49|5_.^Ml mO8 gK BbvRDh-@*SG8uW+EBKt/dF(:޳2?187̮?2'& ̵>1=Qu;/lX)dSCL> &,j&W-@}e}q|_g 9Mߗ$b ^8i'JE}6Y?狫K5d$]#ߤS0xq @8n,X}*aEɲH`a?24YJ֗_k }D}Tk ʑ[{#cOgV)$L ϋRL`-N~,C'lξWIeAJQd>}8%~ LzGy_ի حPe 9O?,`]+ tt[^av%DPN'*\f!woĜmQ{4^ z;0|70%&[=nOI'9uL-wu  hJ@1&eo! 
2l29DvgƒO|k_&[73>x{`N+{DC?q_,?;z ӇU=(NAWף H2pܯsKL7o9pe/XolϜGsNI!aj涩ȝ2+K+)>}|wA$){}73.YQTeClϽp`Y WY^-H)ih)gzǑ_I},0~X`1@Ovc RR3}<~R>Ҳ-ٔ).كAwUV5 F_^3WC4QzT6C!Fu}P+rʴ=2/U0BuL`B=QJH R9 nb+[R@2w`$7_*JLQ='sPYߥi řtS"yxnB*fcK%!>oQ Q~NUDƆ3N`<RlRe@[nD3Ū^/joz{Os;,akyX(>A6.kcTy"́5Q(Wo,O^s(qa$F壘D$I}Ǎ%5t-[]3Vf2M+z &_?2Zl6ȿi8K|MlMot1V1k]?k[2{זL_lRB)I!P ,dXahd?8XL~)g"l =epb?0^G(XnS8ƎiWgO%FٟuC6h~PI"x[YJ{{yl[ yVlW?]O)=Qי 1+⌎:NA=z$E ld`)ϗw  aUWKc`>N9(;I-S3{RmUTO*I;oJ/·]0DqgWz%c&Ԇ;&66\#XHGxW~]yz5sG0)b}< ':^ VǏW e,{g‚{GC;&0fΘTf +ylv9%m~k30D$4P@<{/6+kLvշjaQ @HJŴxW^+Lrd_2[),̩Go£Iy=f/=̾Uu !;Uw~nOpF㲬,g)_o|ì0mJO‹VOC!&l̞ Mv_uyA+%bC^6@CZ㡎+ `rʓy aG+kݨVΡ)xbog5 S__t D˚!0Fw+~ Jρ\W aW9 WSP~l""a/_2VyRmf jQbP"[ԺK m]_&K;}Iѕas0ΩN),OGa=o,OS`Oj(8X=U5 ^Nqx>Gl?UTZknthF P '@r $-mJ ][+삽$ۻMII @ GQ "% Z`9O`.N@u]@y89M6SFA39 @i&=pM )3Oظ UA[@>ZKv I~/ݤщ*H BiUnAM'NM[ə&)zeIERp*EIRTe&!@5Ϲ6\d7aXxڬ01'il|6\XwrN" w>,o0ӜsW@9PY­c)e]BxA%₃7povUӯ'|C>C*in4g徐 gzzh%c  f.j1 _†`{棢cOy.~ڴiR @ da[ ~=+HJVj44 n #x,Z&DRq0Zޜ jw}udsag(~깒:9-(%pm7R}SQPH˄ K]Ժ<{P) cǥ%8?X2y2/KKsXoVPD$\B6KaUe@BdHPqKq&N2\d$y~=$u(B0٭aț4e 87{TQKl>̇5:M`ܚb`vSP`k5XAB-P1K.&O wrwD6賭(TZVkĘoFy#ssdvUXgbI41[XD[=db"{Z\D#'-Qd±4ȨsG s#UeHw Ñɋv[nKoXMZd8㿤i<3JntF/_yᾣ*Pd RwқOڤUrb )xZ4F&D!.2-58(sXrj=V\Zf. 
G 5&K'x[T?}r9._ՌuC}׸)旋[Rt@)>GW,I:00I$=XP0,-=h@igNP 2*֏nhU5M0PmGt5G̤+d?Ud(bA.@9s wPa ڟ MI:{l3~ rph ]pT`u\qx}CL$c ,QH rNAg}<$i \q~*.+Ѵl_~ڭZ/ȅp.ԏ-k4OS f黨"v< /X%g[Th W-y3Qet&Өz6Ee.AZ.0v\nO7āu .d:>XP@:]6 @yN:*5oP݉AFTLo:~z|b9 23&?r踜s=h96H-vm8X+c?Ը1m?IzɽZs"KI'\t76=Qx 'J82.gĩ2$Cx" 'ڔWȢB`"'f77#ek=Kii׷kvd =|Pp4ȸ6e~;eJD¬&m \BkƿP8=gMc&%dT:%Np)cyVPc o}Mmj-׼C1bc-'v%+~GKU( r:8Skk\8떵v{櫀TK7cƂoʭ> ɫ;1FϽ`1".NUHz󙱔 sa,8R]&֜TvU[ɳaP+oױU+*%X)t\>`E:s",.Dv76~DΉ>pqYGn\`Ffͷ\_ Sˣ^us}|v.*0/P v[ ݶA[{O~*Ä1`S7_y~*騏*z,ǔ֚3nD o҂@B ]4#/f2I*9("9IYŤlu*?S2"9.p[/kԔ=rBT$)@eFA ̘$&ߧ]eMkI˲3Ѩ`obv?m>:?`) d0o=lilϹk&roTonӭSP MLfb0^mݫԘ43X@X O&埥F&`r'N@"+y( Boz>tSr!:vكQ 4ɄfFK c\NA^-l'd0r{"L` 6L!e4˜!g [7S< =^:?hM&0mLpD@4R>9O`.N@u0*gJz\ǭ_)28,tv*]]í_ʮAe[Iɶ^{/O䙿kFLH: z*v"hMS{{amP!b q{(ֹ,3n^gx#qN/j@5&88$9 V%il[$In%rDF= X|+X(p4?TfT)BdSPV&SYIsՀTb~{K'E y?DY&Nכ>"9vq[RQiKn65x^myX+ %4Kl67e݆&NN]]E7fMYQN QGeμqfMi Y-=!_r4\mJ$r2FF j2Q!X5-6Iۣ*(֧cbL mP 2G#mP9[k}9Nz~,ۗSeCU Uawf +A,fnaBBE>=hxEFy&@)FHZzW6Vd&1zۋB=DVBAba 3n*& e'uBW㳵JOp΍#m"veo13K_H4cUanAQRTkQiڑo>32=P@(!.IGhΟ> *) ()Jfp1( ]CY`u*:|O@>zsO/҂vӷMPh).AFO=T &z6PD`y\O,M7A%8KI:Y5"sKb~YmNftE/=ab4pWU Xߵ5t-<}Јkc~æ(ik(x `4. PwJo!-4*+ErIiPU}Ky! b:Nb B u,%N˫y$mYd ͐L2`Y|9o)s-# ;hmrlg8is\[.+e4 ) /r"A#PsIdG)$0eAH}n0p,AE 2.lvmQsd,2[ _\oKC2Lg봻+`wPqEJK+V/nﵕ9!PͰI0WԿ7&1]r I{9G~=B6R:B7$#]u޽o#ZX;Jdzc92%ڐ?EE3X0a|rqCD̫}ĭ)6GZR,_ 66gz|SvPB?rVA%P(̟[WTf-0ُ"3ōh݇#Q9b (k=4yؙjn!d= :hV뢵1!\! Ʃx$;Tpo;4zŠ;},˜%#mCa6.rQ<4λWQjBԜH4:0r(0r-5YԴ"nxeVJkVᶜUHY$lpvF?.h4rٞ_)+^}f2%%{<:h1`m&/)ӧL!sPy 5'4m)Y[Q66=TSwUXȀ94r`ﴱ-׵4tt[7]MwD${!%Tl J&}R[}wa0̽Q( -'ONHv'eԶYDLs5{'MzQ"CVg\h/p>Tpa:vޞp58L%o ~@(n,4xDP|Îgki%)7Ra-umk^hLEc+'rVqp(h! 
ntᶯp˶С+ ~l ަc*h!G^ )(<ݯ&V;2̫x2(qj~wt<-5JWU`uMCZU)}>8 vgo]-xq2Tplq@833/"Tz;sekJ**mɭQCؚ6v<[bטW՝01m7e݆& W:ㇻ(.hPR9M]@ J3 tډh:Z}|yA=չ޽h-:bj1߳=m@` QNs}]hZP#25sI85 krL`u+1 $vq!pIx+r2klg2yf?Wr?+G="&\ETb:Nb7 pJ($_Af?v%3KV_E%!ũ)QJ6uVܩjb!f^N^{ :RZs̏#LF5CZ {יH:0%>0O;\Dx,G }\RȊ-#8ioLfKNI(Oȕ82M^gsm J ESy|c(@T!_RI)kxmw<%9`N 휍+ogJnW!ȤZv쳾O/ģY7n&yx:X_Lc9nOט4TM׌sXi^f1DaS%;R̶:n_sh4ySּED=_A`g'rhhLUꒉrlg2-B;z [*gC2ˤ*H -zVCYYfLagwfv:BJdGA= L7h}&i9{tm3P.U.ҭ*M0Ǐbh3IԄpz2PfVK4a]' D@(m.+IjJCxG=6[dI-#\qSpaE6&exM3f/|I"گ[IyyX.̖ɹ1J^ތj%fy^TfY9RtB17 w-Y`8Iv3 oܓR<ͥL5p6 f~peᖀSϑނs4NQJTZkR?y `\~μ\S6MMūzS_UUW##?&Ke3.nsFC 4#76|vY#sH ,VA=Ȟ~y6.X/&/C=@7Ɋ&7q&rgSnϪzyUk~>c:|܏au%n;}u2{{MA=Fb1J*!,';J*ϑ`Bj")(Q+'/V!6ܤNV|0Iy#HiogiWhQK:( ôubyBZB8NZ.'{r8$TjaQӡ\D=Ƅ2IN }xT||-Ớa7 }Ь3cbD[L B|Hhn19|ZR樴!? ʪ+P-s?Z]z[DОi͂!uM6PVDJϻ(##K笐 L@2ö1812sNE/*ޚMM]@>QSPGeΐ()mp!3齽} X@0ah[fYd67Fl7G od\G+qSpxʝ>M ʃ6ۛ`>3F\ A0 Rv}R1JL^+c9,>"[;LPAEЮ!3BAF!vxzwj0^pU(I7OZ$p.Tíq[~0|SPDz=<C;4ѝ0A1{'Gw^Q bXEA¢g{*(ոz> Ğm^;M)ݑ6SPi9I 2zuD{vJ8աTTڒ[&cmfiQbҞ/Uc\Gӛrװyas*[35"b@J 6}|n;O\wV/=eGv߃] 2.Ԇv*GRCX1^A{𥆓.^d9 lԩPF騇 ~@m~i1DMzUT!uJ0,uW y1l*yx8Y^gT5>qͫ @@kM["9$uB="v??/oyŷoQi8!rc A1=pVIhrf9Ic<J­| AKMD0M|WdAfoҟ/=D'HwHvqřImQ\JMQQe]mncr4dTwLxĎ)t6LorMUkSn5@™-n6[)d a:yG~B}M2-_s[Q_3E]}[`1"Ȣ9eF<w|:"d6΀1&ۛJeM}&`{ߒ;>oBfDQ)M`Bpy9@W}@qX[^ϯ+R|s6B) ɴ]v;2"4(70vЂHs!*JF|$w>Ig,H)8dkSlwn%?b62t"J2sֲ'?U))4t0 gm8 \Άݫ~?,"0Fl>r<%J|3ZieIΈ=Z~0$zWWW)(Cyaѳx{ߊ).~$Qtřx5Z`{,{49$~6Dܺ&W:뀰fR"܏ ֘avbשkܪ)c$ͩa5y,iOt;W-&R6f~ֻm Jq*^XĿ>Aɰ_uYNu߼e5]tbT\+TvO 5LO[3֨98?ղAK:2y9#Q&>KZ]p'b_|i/%G|ԋ5߆;_迿aMO͚8.߰>eO+Y}w'aKVkS֯ƺ3\T{ y|ӏS<2)yEsԱh!@~Z  h=A}P-!=F7c[־q}`hM񎌘vWa*禶v k񲊼NSO(pP1$ru_a|Af'GM9Us[N pE6J]mqn'gV4:j ;ҥ>PHler\OXae w[شf;u.!8L/H6]L'EV\;™ zPTs@ A͉Hvt@$=z:L ]Þ%M>Գ@VB,: GrH|RW#ۂςhY&mi[Yh>ղCI$aZGmoܹ;zsUxs(1׍qڒsЖ*`GfW%ÚVTwEhw+RA~Z Ns*5 7sY DlR^_C4A:Ps"FyRSI@y=uB:I 7#^B):b<ġH!o/]b. 
[binary data: gzip-compressed kubelet.log inside a tar archive (var/home/core/zuul-output/logs/kubelet.log.gz) — contents are not recoverable as text]
ԏk |_yEɥ,Ź.c$OCZ0q4²ёnpJmrmyt~BPc+ucܛШQX5{݉x>5k^Xٟ!pA'7L^6ZОlp9 d•X`YNXDD ,{oXڮHRkoLBArF&s;liZKu}~28~g|n[n[n[n;'X^wpJ۲%ːtT + 9BINS6&'- ϱ%i>܆tVav;{.QKVM~!{0saלm?;k{nXک_fS.sITgص$;9?@&?Y *;Řx\-يDRR5b\>o>8р9!%ԽŔQ S;!rt v| >[$von4azcI0&jӂ9 \V{9^)L^s?x4~(;M6iFshz [֟N_8(ϻ^z azϓ_LO~o8}q{}zvq.nmE˭QhM7* 7o(^|z} '|~׃6hv A<N%G&'?>~tDVfA׊4?bU<[2&.j݈U)zP1n/{w~{z凷G9F oWwn߾?g]|<]pi;ԥ+Xg_v;YxXD_<, y`ěAc ]CzNK`~<lk?!뛫jjfCw1m3=p~ywya&7&jb)L ^ߚu=z^Y?wg}k[9^x|bU-ӾZ󝣰@h$]mu=u;&|cd&o/plg%VNx7f?^j9^;A /w VU*C0rOhma\O ̑AS;pϡ?O߽ӝԵσcGux]-xp_0`!Yp~ ٘"/gFgzïW]ً%ҳiox~2Aarѩzkߏn`dWM>{kҟ~_XDDi#/-,_ W 1&~=ljR.dymk~%?I԰U\bl@ܜ n4 (J ej_>M]w616Fz>.|`~w=ިs)e>7u5uk ɈӧO{}eS!dIcGtqh]P3Lʄi=ȓ) teRt[5J M{Z(ObhǞS88u 1FcSː8~t^ު]W QmD؊0ZO=}mnB 4 nx]2Eouh>`L[!D͋BqWH"FٻVn#WXzٸ*CƩQNˎ'*`aL]Ή!) ɡj.bK0}t6d6)Ԗ)WSUX7w<"2'}@KC|JRae`u 5<@Ӹs`9P\q Vdng`b cAl cb$cJ"p/: ̟?5 wo(s)큲%3)wg=F Z[)6Ew 5D0pݵ,[lDarܒ@ L _\Yu;. (dAd<` s}ahlJz98G;' П=٫ τ+sL10|E)<`c쩇Ӕu z̐![fIB{I%=h3탴 ѕxHal +@k;OÎ2cEkSUW=^ }1C˿DK+CRШdi0U*}w!/ JRw$hHi ʐL@Cz3NY?lN!qYevg~R>w"bHIcCq5Q Z%rKuR(J7.o˰X}6_S|X㷌G7e)D0y=VqRN~[ T\k-~I'o7I '/3v+s`̐/$VT豌 6?kHr5j<:_=᪕C,on]չ=St(qyG]Ra2dZ3fTӘ ].\L.Ls>\;eqvRF)ZNǿ-QnoL'_K݋xj&䓍ydˆpU'b"q<}(>ߞ~ĝgNow;-[WrG,sVJa< %4r?>8;-{d6{GCIv!91oUp $@H o7m{ICk/v o:lz;'Pfh[wJD@)9ޒ6w`DޜA;e!'_Ŕ@a, h=zmN &Qu"b`4Au<&Kt4GQi^Qt5 MZ~!-2AbyO敽֜T4F|p9}W=P;H?Q)ɥϧH!n~kw"EUV M$Wb €"R&(K40~(tsVe4Ϧ޸|nEOh۔wpS7\M%B9k `ySxTs2ͅY*R+!NG3ۘ-tsK@p:]{SH{fІu7+ 8aT`;|@sI& 䙠dx x3ٞglx7/Is5:=A8𾊙Nq4} 0fQϖk0#y\7È}F4evAI;)UmAq7_3AհqNNRP@(gƇ3҄ՙGyeJi"JE{njё]$aT*Nx_f} w}i"rvɚЮۙ^y[S>Dl~NġK-zI="%Iϧw85Ru1^wQ.H&pf"-^V%NY(mP7L¢rqbgcm±1#B*4 ꉣDL!Vvܾ5rb~۲G Rc/Stڨa:N8;+c6sV"3gZ .h!aG 1}N =[jV?^l⢳>^B-{zeOxm dWrAktFu~kޡz=4)7l6'J;Znq{8;O>-NOQǍ C?`O?|he/#Pc˾e7EngMlȃ|`/-ړQm:O1h5~c3Ȝ!Ą 3TЖ)Wi@ڕW~\NVȜM#,aR Vn&Nh$z_[̈. U2J|oG7@W)2귵E76AZhM C ;_9MSR5-L R?Ɣ*N~oE]իUit nD$ZɍLJ+K%!&+LK A}X8"$M,$G? 
ݮ2qNBlѹVv2I |o[|Rz3RIy~~Z?E'>E O1WdHa=(}!EWۧm gУb\# `(JA̵\Ƃ;K]sso E\ɝIZ'j @- 0ڸrӲ!WI|Grs݈yE{pOHY6 =U%zD5ZE,POdO{5S/fVH&ao;PL \tVn2w8}aZn>^CHO4y~6j;ʽFշبrjۗ:ܶl}nʂp/F2}{\Zi%oj em{]sMJmQŠW䷤/'weqHzI~ٱg01=+[aHrw.o0%U,JICU_D0`!%Fz6K,z?x0mxypٗnۖ.ZZŀ5KgwTk~w[]"3gwX\3٢Wmegm O;waiD2).E]<4 $u(jI-Z uK[C{-_)P )h= Jb'U٣ifEK"B>] 4`v`+p iłRc2fԔd#T+\895S2sF?cJ"!&[hkAU?:`+i^ "{~w*|ax?`jW?JM߉2W ܌{1:aٶ4s +RyBBeBg#?F2f~LobjJۂ{t>|ۗۮ`{cXVKB,;Ѯ{ٗx|3cWl@E[#RNf j:7,ꋞUbg/tθ-ك`Z7zmas[ʾV*nKKlyzں%1OYIF>,Oa a1wHZ`TU`Eyaȥ&,}#e,QQTN":m Gk'بz;Й`|cK%o9` z /R DÈoƪyF #u\*V[c*׼nY3Ega0vZh=}sF;4MX#vb{̸Yƶ^˷lc/..^hrcLiC U@B]Ylc}UIf}̫ԇ*t+j^ ܥQ+*{ 'W5N8W'BCvOQPIgoPV|VN>jWY _EDK׉yo~}-eW'}DK̀#Z@-5 v3ɥdΎߍvETˮ%V̾,[A-;٭}ء}3MNΪ, ؝!*dFKTzyP5#LW/%}u_Zh%y૨uF@I;'~vTvZM,@NIGN1??=&#Ɍ>C|PJW`ܜ^J vag)O9zkgHV~a؅ό I,L6@5LHVf 柌1"F#DP)(C)I1%'xLg'El!pRiC>l,ŨIZF%P AF*?_JBafdufZ9 2+)?JFr BރD?҂g 5W0(˿M! @e] 6d'^ B`Hڴ%"͕t!Po}XzjTrK×jZ?w_>?,F81x~z^rr?=>?~>;cwO$!-[:?5P6^{ ]ְG|qcuWFY@]Uē#oǠZt 5flCFtohC_~{ X\32|~{Ot6\فRt+`ҩnJr+Trʨ4ѫL㕑! gYf]DQ8o d =P=ˢӗT `C >1I"2!ll |PPDMSj}%md6q2!x f "ϱid6Mbkev9;tD`ֽt-,ldlFE!!>OI +ѕQI{&QF a1i-1elX40k5ZςM[>7!i*FlO ;,YmL J'v3w|ԍݑ[gaHb{Kn#?ȎM QtiaUܼ 9`_ ؜kⲲG^\zIpR'pT~c@N~>;3:ksEhUm^wCBshIF$mm^w1'x(FA)WqZv@ CX"t׵[Ӂ5ɰ1]5UJ3O*8 tiBQp@3ז&y؇16 ÂY( b1CT}󞼝(`cS`31F[1M5y=>Ac2TGEc@Zc vbL">̙ K7cޑ1Ot4FF}c# HǻZ/4a_GmpsY*7K[,}_*&q^|_&-[^FæO/w2˝r'|\!^^.f1:uB@(H$slJMeXJ]qYSon{Lt:;hDo~j+]iKӌWL}isR<9+vzsrtOmZ*FX[5w5.pY;ѰfgBwDP5gɁdIm4\>UrLߨ3leJagD -Ƙ-LۂU7).쨫L)vSu;eU}Rc㤋Xsߛ+9fIrue KdWUy/?[G1#:S~!MUj\,KNP(Is>5'OŅx{i[p| . g?Msd5,ЖMvOC"<|u>9mnߺP7XE>/ZIl×7Vӎ 5n]3A=Ju w:kDŽ'ٟݜkHK]JdwaZ8a}*ʏQTFF$26\~}kӋ|pQ8-I~9Ru*]A{Qvܕi{m^{نȥiK =}. &.rAc!;.\j=cW7O$sc\*EXJ1(} nt:";dvoA<֭4kcNc.;%Z[Nq.GS>R~;hrNߝNԋ[U;!D*Bj=iNKHRV{C~QHE#sR$1Oք P>]lz\Zق)Xd~+xg4ԙG36C~ŕw4J(EM݈~l8h/W_L:ԟ](}09#8[f42f䳡of9I=b{G?&VsYwfƟqgYUlc$>5 /ջ.S8w;?ƽO9´[Ҷ-B1ˣ>|zqɖ}v?gC0kۇhvVHw-[lqKJ ~<)oe| {Qի~r%]H~Y=88IfJۖWcνg)jQy۹sIS׺ˋTf4ryߜ> c'45b$M{TJ8A2Ķ.dߥ>_ZBrPa4gpp;?##WNnl攀I[''zR$H0b//>`†56R(f3 n-y?) 
O0z(/̡X|&׾7} \jfwP/gUps]l5S[V}㿐Q|" LUgZZH+oRmNOJo&ålBxe*̸M=@ ^' })#%uPڒ;$WID,wޱ0KĥcڇH)&NK j=Dz gzWۜ_>DJ?`Z#AD$1#$]!GK Ja&nq_ j /A&d1cԽ]M'/:F{8+O?uCW2-=_|ljY> <˘s|.ce1ϧj>iY[q92cޝÙiX K҄XJ[)q  )68qפtbv 7#K%npoGd}Φd=v[%JI \mDr"*}?o~ *Dz49C!9cO]쒨JR# |jFD<̊!1fKSb2N'ǜ(]b.JkES>_Z#Rtwl2+|q6d+ڑ&û9󪇾!婢Ήf[>17J@X1I8WU'>[ CnV{qrq4X+ڴ^@edo1Hݠlv|߃p{&eh3\H[>ɝ{?)B9sJB8:b1c Q VYY,8@<dF:x ,VUtOD !Dd3 y $` *c0RQ`=lwK\jP5 *v ls޻Qȗb÷1`bD=+7J_LcHFj+_4mۮ.gX^+Ԑ}}v {Qjr\UdC}5/I<ŹY:"5eWdw$mlAXA)pTE+WI iɾTNèerMIjOw)])7`1>lT[c1RHHٓcBM6tҝzkXxE5ڂŐP *˃^h 2"8[U awL M4QD\6Y,bW$%q!S:sJWʩ1Xg@.oHՆƹ2؛gLq~+kSd`fsBG.j9Ġ ̙3Ÿuh`bId6 7l#Ut^9$Q翟Wߟnx9OבɯϟDDI޿ j1w: #x 7>[|c+yo.\[ UPon\?暝"HşMߤtu/UInb*)h$} + .HP/CI;t }NSX4Otw;8SO=\ o$"7M+П_U+G~sl[g'YAo>O~=DxW? ޲09Ch[FT1k5<[N *>Q] |~5^ɷ~p# Nw&͉tGyMK@C'jdZ?,]MGm߾p.+u_7~_޼ft1h5ɻ _?#L oo{Gp^t:o·t& 'h~~;`ɿ޷a]ОbEbpun4rq g 4ȏ`&$~ل qS}#wjq_ƽS^x'ZSvVO.Bܥe63j)02ypPÌ!"arKyF,er]cJwݬvS ;(7AL Ubכ=n3 ,皍c [;vėޗ#!x>siSQ^ƌJ9"GﺗLGL-(Nd8| \\{8 `{}xMÒ )wuTBo"WtFWV0nާJkv {Vv9b0377nj"9ԃq{_[G!4 :MojBݔO?:Ѳ *L@KJIG̷) VabeˮE$2a<4IF7Ic0$k1Ƭx,H޲۰pXX xz$?ANX^uSkA Ev([.L4Z7a==4afZD&{q-.\OT0 6-{x4KDm=ڠhԶht48婡嶴ѺT G,qe0ۗROu:)m_ฮJ A B>zɛBK%4NT8z\ vzyU(ꖗcX ZmَJ%prcX" +B9I;t*4;BjXQhFнC|03$&㦒j:m_!1f:$b-R^y*VUӽ#TXn.rn2I"̹{M %)& yG4.u=i]=b?g:8֞R64 /2-Zȴšd0*$QՁ*#:+##,v([:JؕcrepH!9ib_"G˟rh(wr+Wj&Nw}׺LHu\l?t@lf6G9`4K|xo(c#h\4 .S:]=~[j&K+}ַ@7dw}o'AVL8'濟6~v Egݽg V4&ogj:cssrz(MGd*O؃mƺaIX=9bp# BugWP\ؐ/!F V;YHSŨGx/o*JY D|88'ܸ@2c*f"dX(M*a?^T z:~aIT19=R@̺ĞаMi߿ oRXPyv&P\׾ʖTj0;᷌ kAR 34 cKK'2eB˹`(J28dʝ*7t򷾬$bhZV)ߓ(YViΑGyR\rZV =0+ҒԲJ99TR>zJSѻP^cXSP(QXbcHQ" Wi\UR]bO.%cVr'̥h N R4(KN kg-JyvWAċ!^pIJ7ҩ :j: fsGSZx)4}QeQz @ ND ?$W#(EEtfd[$̯?lَ۲t֌#VbU,rbl!wķbâ|#=]Ijvu.bFЃyt7FԻ ~R^oˏQˆ{?G)f؁pst¯:kWCG'ru=.tR,X[JVp2-V0uariE<9Οw7)=]\sժ6iK29grZ#lw/nh7 bbQ9wďI/>ۯb|vyAVat>OovĦ]@Y}%8ڸ1;u֪/>ZqmjGelY=9c;K12\1ǢbC=`-+ !Ѿa !ibm$Vs܏_Tubݲh@ !57'S1Ldv00A3#E$WeSde5m :jsav@zE @Z^Bwfo tFC*38{kۣC cӣFp:y(0cәr=hXW);Tz+&5 :Ζv%c44[kJhj%9Ʃ}℃ٰcP kfFK~_K:GpciPD=ҕ=q\{^c+dq׫" ]:d>Kʒ J>Q'&:ؤc 7HHr'nC2q٘|4E'^J)W^ m'C3&@I9&u\I\::Jo6^:CZ"XBo Qd( G%K 
B}$r$I-x~!ÆptUO1P=ǒ߇X[57XN76TCG`$TU0^\{C+r*;PM~2oZD ő1DKh EAcB]M.ę-9aT7lԪ>磰ۨ>]Z-ʹTJKntj!k7YZzZjFN,J評P#_wo^mUJ![ P;f:FuC/¨׮Zk~+^{n(@=AioϘl Bi{8n'~ ȵ#o(pz|Qƪ}i_ΉVc||}q{RZ/No㼶Pޤ^&-O?Tn=Tnp&-dԤ(]57/o*.;<,.Nsgw s ~ #ïCqTXsGbW5Oڡ5yR+e>c:{@u:mFPݱf }B/{r_덷v(ùs?Nڽ调8XP%[H>[B+7@j I3Κo'zf&TlH9h 12&kR<\q7O Y>ͮ醴~>Ū 0f75?}Ynz'/+ZeUm joߜ/-UyN|YV'mc`_fF sLN6,Bbd2d"stP]YWdkQ $V٩3wbWYˮKZfMoj4)_,qPKΜ9#23hLd'gD:@_tߠ]61 ,p)=jä/ľtrI;![=OZ~+!եW#kWS}j+WuS+-%Gnj+r6tBbHh00eV՜{jLCD|~ЊI30l" ܌kxi`+%79[YgLG }=BOf3@M2<)$! 2RӧOvj/s\d?Xs#`u![ZgGo>QېQ;Q񁣪5fX#A} W[)%bg)$e .bN@3Ԑ>P5jWL^y|h YhH%YJ4!x'g7جt\uc*uI}:hI6R)' 7 *)&F;g1fOfG"h5 #2Jj} ͏ԔԼ=NÞ͵Tr#sۂa#A֚4gՑLdj$jMN/oL872\3fr6fէkT$Gs644z>tէkTcYRPKՄn+jE=_?n.U[fZ>QxAZNܞ]kn'PkWof*aGӊ4o"=W*-OsQ-T)bmJ$z#EE"_f, Rk^E\ *lnԲM\;WհQFo c3p<,uH>gvƹҪzaR>PYɇ*#wA4]ߜ_s)󫋓P%۲mD{xLWXŧ;F<طapnC*Ix~uE'qBw TеV`;5\/~o.7ګB(iIe)j(6x,B|MƯؑ%"P!to],}7Ch.d4id} wJC2,,9d"EdA\q%dq6Dϯ=b깉R+QkWҜ+A*I;%~+qX&5CU-w9`!}2>j $$kqBXg/Ag\IBW\zd۩>}AuxNvxdH05R\(TfZ*V ڭJLK^R-F:1వTa3-UXintjakfZa/٥~Ue~uή="/k@  [ޙY16 qPTgLy_d%LސJ48r8~HчP',@ xu[c @ffNTPa+Mv=GWil 84FMC/ج`ܧ6 `>\˭ys^,wUbѬsg,g6 h`-f@0Ó~6ϙ_Nv Tİ GRW% Ic)ůnEY-<nFH,"VU+u]:T~fх,oUOFL- [``LPZL7ih4nY4vէkT0[VnmNS=^Gpjw{KH[O~Ӌʶ6fY0[蔗JX6q`}.>/ GY dm_f:a^2#io= "-Jz3m4vg!'7vT"z{wmP,e-:-RfB92vk("7Jj\0ϫwr.߿\ZF)Cӿ|_nio*aEqE?O>xO᜜$SQN2pU]'[Pw'W%U1i<1Q >1@0R~^k/X`mR+LrysLAS]EBYA+y<]OeVf'#-E0_r`梦0K9R6GALqM  vǩA ->[P,Ի3}gaNGK(V:r JX¦H.KDEĘFf(_YiVVGldQUyC#h< a/;5tCA0Iuغpz5!C/ Co"À\b)Er\1ϯߖ3K4fF/;r%fcab^4BtGAL ^C˝ah4( }?Ԙ8%DO:E였U\AĀ8K_Rp!x`⅃vy~7.Y#餪0U,ۻ߶q$ؕ#P$bqנO Y_;wGJMrLli"qii^9HW*yevPGA."ț I&/[QA4z}2 IߦFK.j=MIυ[meI?u:bօ ==͆v=$RUXLPaРmPoT&X8!U"B8=^`b:dKc_n2 knMlBOLǖ &s.MLry.'Hx9àu^TO$BBF P)KSxQ$_ԱJ"B䱧DQjfB PrBSi,Aav:61mX7: 3Y 'pf+~̺,Nòq]9FV\ҟ1u5Ha@OqA"0'{ ǒP|gƅe3NoĀ}>iT86#,n.jsdeZ'<й`z=J姈D|3h !p+dQ ]cbC \}|v GEISnYiq[?~Ujn8d fI&ch}F%'ԗ7 >T3(N?No4WEHQa0oQ ? Ab9htۜp0@}2Tj&Ggnשh*#M7MJP4$/z sl~R-*h7㍴Nb{<=&lfP~[rX`+}rХyx2R#PMdzv0*mAY/16ySߘߗ]t}E+rOoA?u{5T9ͯ 9s)$'oz7y x+}GvuVr`?C감3756eZF/f ̘̞!l9AA%q*2㺀cfA<pܞ$ęi=NPtx zi壂ޔLX] PC2zrel|vygԷt|yqfIKq郱d=9XPo? 
var/home/core/zuul-output/logs/kubelet.log
Jan 25 00:09:19 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 25 00:09:19 crc restorecon[4702]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c4,c22 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 25 00:09:19 crc restorecon[4702]: 
/var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:19 crc restorecon[4702]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 25 00:09:19 crc restorecon[4702]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 25 00:09:19 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 00:09:20 
crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c1016 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 
00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to
system_u:object_r:container_file_t:s0:c764,c897 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 25 
00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 25 00:09:20 crc restorecon[4702]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 
crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc 
restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 25 00:09:20 crc restorecon[4702]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to
system_u:object_r:container_file_t:s0 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 25 00:09:20 crc restorecon[4702]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 25 00:09:20 crc restorecon[4702]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 25 00:09:20 crc kubenswrapper[4947]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 25 00:09:20 crc kubenswrapper[4947]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 25 00:09:20 crc kubenswrapper[4947]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 25 00:09:20 crc kubenswrapper[4947]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 25 00:09:20 crc kubenswrapper[4947]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 25 00:09:20 crc kubenswrapper[4947]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.903982 4947 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906553 4947 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906576 4947 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906583 4947 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906588 4947 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906593 4947 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906598 4947 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906604 4947 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906611 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906617 4947 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 
00:09:20.906623 4947 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906626 4947 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906630 4947 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906634 4947 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906640 4947 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906654 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906659 4947 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906664 4947 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906668 4947 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906672 4947 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906677 4947 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906682 4947 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906686 4947 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906689 4947 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906693 4947 feature_gate.go:330] unrecognized feature gate: 
MachineAPIOperatorDisableMachineHealthCheckController Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906696 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906700 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906704 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906708 4947 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906712 4947 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906717 4947 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906721 4947 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906726 4947 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906730 4947 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906735 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906740 4947 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906745 4947 feature_gate.go:330] unrecognized feature gate: Example Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906750 4947 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906755 4947 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 25 00:09:20 crc kubenswrapper[4947]: 
W0125 00:09:20.906760 4947 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906765 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906770 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906777 4947 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906781 4947 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906787 4947 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906793 4947 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906797 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906802 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906806 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906811 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906817 4947 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906822 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906827 4947 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906831 4947 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906836 4947 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906840 4947 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906847 4947 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906853 4947 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906857 4947 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906863 4947 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906867 4947 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906872 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906877 4947 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906882 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906886 4947 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 
00:09:20.906893 4947 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906899 4947 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906903 4947 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906909 4947 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906913 4947 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906918 4947 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.906922 4947 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907199 4947 flags.go:64] FLAG: --address="0.0.0.0" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907214 4947 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907223 4947 flags.go:64] FLAG: --anonymous-auth="true" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907228 4947 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907233 4947 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907238 4947 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907243 4947 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907248 4947 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 
00:09:20.907253 4947 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907257 4947 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907261 4947 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907265 4947 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907269 4947 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907273 4947 flags.go:64] FLAG: --cgroup-root="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907278 4947 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907282 4947 flags.go:64] FLAG: --client-ca-file="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907286 4947 flags.go:64] FLAG: --cloud-config="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907289 4947 flags.go:64] FLAG: --cloud-provider="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907293 4947 flags.go:64] FLAG: --cluster-dns="[]" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907298 4947 flags.go:64] FLAG: --cluster-domain="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907302 4947 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907306 4947 flags.go:64] FLAG: --config-dir="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907310 4947 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907314 4947 flags.go:64] FLAG: --container-log-max-files="5" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907319 4947 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 25 00:09:20 crc kubenswrapper[4947]: 
I0125 00:09:20.907324 4947 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907328 4947 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907332 4947 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907336 4947 flags.go:64] FLAG: --contention-profiling="false" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907340 4947 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907344 4947 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907348 4947 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907354 4947 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907360 4947 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907364 4947 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907368 4947 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907372 4947 flags.go:64] FLAG: --enable-load-reader="false" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907376 4947 flags.go:64] FLAG: --enable-server="true" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907380 4947 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907386 4947 flags.go:64] FLAG: --event-burst="100" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907390 4947 flags.go:64] FLAG: --event-qps="50" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907394 4947 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 25 
00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907398 4947 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907403 4947 flags.go:64] FLAG: --eviction-hard="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907408 4947 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907412 4947 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907416 4947 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907421 4947 flags.go:64] FLAG: --eviction-soft="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907424 4947 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907428 4947 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907432 4947 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907436 4947 flags.go:64] FLAG: --experimental-mounter-path="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907440 4947 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907444 4947 flags.go:64] FLAG: --fail-swap-on="true" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907448 4947 flags.go:64] FLAG: --feature-gates="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907453 4947 flags.go:64] FLAG: --file-check-frequency="20s" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907457 4947 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907461 4947 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907465 4947 flags.go:64] FLAG: 
--healthz-bind-address="127.0.0.1" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907469 4947 flags.go:64] FLAG: --healthz-port="10248" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907473 4947 flags.go:64] FLAG: --help="false" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907477 4947 flags.go:64] FLAG: --hostname-override="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907481 4947 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907486 4947 flags.go:64] FLAG: --http-check-frequency="20s" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907490 4947 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907494 4947 flags.go:64] FLAG: --image-credential-provider-config="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907499 4947 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907502 4947 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907507 4947 flags.go:64] FLAG: --image-service-endpoint="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907511 4947 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907515 4947 flags.go:64] FLAG: --kube-api-burst="100" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907519 4947 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907524 4947 flags.go:64] FLAG: --kube-api-qps="50" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907528 4947 flags.go:64] FLAG: --kube-reserved="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907532 4947 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907536 4947 flags.go:64] FLAG: 
--kubeconfig="/var/lib/kubelet/kubeconfig" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907540 4947 flags.go:64] FLAG: --kubelet-cgroups="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907544 4947 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907548 4947 flags.go:64] FLAG: --lock-file="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907552 4947 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907557 4947 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907561 4947 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907567 4947 flags.go:64] FLAG: --log-json-split-stream="false" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907572 4947 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907576 4947 flags.go:64] FLAG: --log-text-split-stream="false" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907580 4947 flags.go:64] FLAG: --logging-format="text" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907584 4947 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907589 4947 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907593 4947 flags.go:64] FLAG: --manifest-url="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907597 4947 flags.go:64] FLAG: --manifest-url-header="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907603 4947 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907607 4947 flags.go:64] FLAG: --max-open-files="1000000" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907612 4947 
flags.go:64] FLAG: --max-pods="110" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907617 4947 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907621 4947 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907625 4947 flags.go:64] FLAG: --memory-manager-policy="None" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907629 4947 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907634 4947 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907643 4947 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907647 4947 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907656 4947 flags.go:64] FLAG: --node-status-max-images="50" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907661 4947 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907665 4947 flags.go:64] FLAG: --oom-score-adj="-999" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907669 4947 flags.go:64] FLAG: --pod-cidr="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907673 4947 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907680 4947 flags.go:64] FLAG: --pod-manifest-path="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907683 4947 flags.go:64] FLAG: --pod-max-pids="-1" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907688 4947 flags.go:64] FLAG: --pods-per-core="0" Jan 25 00:09:20 
crc kubenswrapper[4947]: I0125 00:09:20.907692 4947 flags.go:64] FLAG: --port="10250" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907696 4947 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907721 4947 flags.go:64] FLAG: --provider-id="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907725 4947 flags.go:64] FLAG: --qos-reserved="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907730 4947 flags.go:64] FLAG: --read-only-port="10255" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907734 4947 flags.go:64] FLAG: --register-node="true" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907738 4947 flags.go:64] FLAG: --register-schedulable="true" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907742 4947 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907749 4947 flags.go:64] FLAG: --registry-burst="10" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907753 4947 flags.go:64] FLAG: --registry-qps="5" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907757 4947 flags.go:64] FLAG: --reserved-cpus="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907761 4947 flags.go:64] FLAG: --reserved-memory="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907766 4947 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907771 4947 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907775 4947 flags.go:64] FLAG: --rotate-certificates="false" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907779 4947 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907783 4947 flags.go:64] FLAG: --runonce="false" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907787 4947 
flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907791 4947 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907795 4947 flags.go:64] FLAG: --seccomp-default="false" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907799 4947 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907804 4947 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907808 4947 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907814 4947 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907818 4947 flags.go:64] FLAG: --storage-driver-password="root" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907822 4947 flags.go:64] FLAG: --storage-driver-secure="false" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907826 4947 flags.go:64] FLAG: --storage-driver-table="stats" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907830 4947 flags.go:64] FLAG: --storage-driver-user="root" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907834 4947 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907838 4947 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907842 4947 flags.go:64] FLAG: --system-cgroups="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907846 4947 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907854 4947 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907858 4947 flags.go:64] FLAG: --tls-cert-file="" Jan 25 
00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907862 4947 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907866 4947 flags.go:64] FLAG: --tls-min-version="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907870 4947 flags.go:64] FLAG: --tls-private-key-file="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907874 4947 flags.go:64] FLAG: --topology-manager-policy="none" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907878 4947 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907882 4947 flags.go:64] FLAG: --topology-manager-scope="container" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907886 4947 flags.go:64] FLAG: --v="2" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907891 4947 flags.go:64] FLAG: --version="false" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907896 4947 flags.go:64] FLAG: --vmodule="" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907901 4947 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.907906 4947 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908002 4947 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908008 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908012 4947 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908016 4947 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908020 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 25 00:09:20 crc 
kubenswrapper[4947]: W0125 00:09:20.908025 4947 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908030 4947 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908035 4947 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908040 4947 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908046 4947 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908052 4947 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908058 4947 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908063 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908067 4947 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908072 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908075 4947 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908079 4947 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908083 4947 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908086 4947 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908090 4947 feature_gate.go:330] 
unrecognized feature gate: AzureWorkloadIdentity Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908094 4947 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908097 4947 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908101 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908105 4947 feature_gate.go:330] unrecognized feature gate: Example Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908110 4947 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908113 4947 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908117 4947 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908134 4947 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908138 4947 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908142 4947 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908145 4947 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908149 4947 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908153 4947 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908156 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 25 00:09:20 crc 
kubenswrapper[4947]: W0125 00:09:20.908160 4947 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908165 4947 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908169 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908172 4947 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908176 4947 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908180 4947 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908185 4947 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908189 4947 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908194 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908198 4947 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908202 4947 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908205 4947 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908209 4947 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908212 4947 feature_gate.go:330] unrecognized feature gate: 
ClusterAPIInstallIBMCloud Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908216 4947 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908220 4947 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908225 4947 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908228 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908232 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908237 4947 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908241 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908245 4947 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908249 4947 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908253 4947 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908257 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908261 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908266 4947 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908270 4947 feature_gate.go:330] 
unrecognized feature gate: SignatureStores Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908273 4947 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908277 4947 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908281 4947 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908284 4947 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908288 4947 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908292 4947 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908299 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908302 4947 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.908306 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.908319 4947 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.917560 4947 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 
00:09:20.917578 4947 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917645 4947 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917651 4947 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917655 4947 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917659 4947 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917663 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917667 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917671 4947 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917674 4947 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917678 4947 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917682 4947 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917686 4947 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917689 4947 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917693 4947 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917696 4947 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 
25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917700 4947 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917704 4947 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917708 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917711 4947 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917715 4947 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917719 4947 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917723 4947 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917727 4947 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917731 4947 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917736 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917742 4947 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917748 4947 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917754 4947 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917759 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917763 4947 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917767 4947 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917771 4947 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917775 4947 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917778 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917782 4947 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917787 4947 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917791 4947 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917795 4947 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917800 4947 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917805 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917809 4947 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917812 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917816 4947 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917820 4947 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917823 4947 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917827 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917830 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917833 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917837 4947 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917841 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917844 4947 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917848 4947 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917852 4947 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917856 4947 feature_gate.go:330] 
unrecognized feature gate: NetworkSegmentation Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917861 4947 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917866 4947 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917870 4947 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917874 4947 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917878 4947 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917882 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917886 4947 feature_gate.go:330] unrecognized feature gate: Example Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917890 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917893 4947 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917897 4947 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917901 4947 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917904 4947 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917908 4947 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917911 4947 feature_gate.go:330] unrecognized feature gate: 
OnClusterBuild Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917915 4947 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917919 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917922 4947 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.917926 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.917932 4947 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918029 4947 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918035 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918040 4947 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918045 4947 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918049 4947 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918053 4947 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918056 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918060 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918064 4947 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918067 4947 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918072 4947 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918077 4947 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918080 4947 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918084 4947 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918088 4947 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918092 4947 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918096 4947 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918100 4947 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918103 4947 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918107 4947 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918110 4947 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918114 4947 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918118 4947 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918142 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918146 4947 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 
00:09:20.918150 4947 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918153 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918157 4947 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918160 4947 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918164 4947 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918167 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918171 4947 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918175 4947 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918178 4947 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918188 4947 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918192 4947 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918196 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918201 4947 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918205 4947 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918210 4947 feature_gate.go:330] unrecognized feature gate: 
MachineAPIOperatorDisableMachineHealthCheckController Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918215 4947 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918220 4947 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918225 4947 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918228 4947 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918232 4947 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918236 4947 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918239 4947 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918243 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918247 4947 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918251 4947 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918255 4947 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918259 4947 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918264 4947 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918269 4947 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918273 4947 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918277 4947 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918281 4947 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918285 4947 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918289 4947 feature_gate.go:330] unrecognized feature gate: Example Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918292 4947 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918296 4947 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918299 4947 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918303 4947 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918307 4947 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918310 4947 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918314 4947 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918318 4947 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918322 4947 
feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918327 4947 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918331 4947 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 25 00:09:20 crc kubenswrapper[4947]: W0125 00:09:20.918335 4947 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.918340 4947 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.918465 4947 server.go:940] "Client rotation is on, will bootstrap in background" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.922113 4947 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.922237 4947 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.922693 4947 server.go:997] "Starting client certificate rotation"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.922712 4947 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.922930 4947 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-11 15:44:47.555907348 +0000 UTC
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.923074 4947 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.928706 4947 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 25 00:09:20 crc kubenswrapper[4947]: E0125 00:09:20.931905 4947 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.935290 4947 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.946546 4947 log.go:25] "Validated CRI v1 runtime API"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.969407 4947 log.go:25] "Validated CRI v1 image API"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.971524 4947 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.974863 4947 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-25-00-04-45-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 25 00:09:20 crc kubenswrapper[4947]: I0125 00:09:20.974914 4947 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}] Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.002856 4947 manager.go:217] Machine: {Timestamp:2026-01-25 00:09:21.000512021 +0000 UTC m=+0.233502521 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:07b95270-97eb-4b89-897d-837b061280fd BootID:a468ef55-66d7-4612-bf14-5eff54a3bf14 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 
Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:65:36:04 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:65:36:04 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:6d:3e:57 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b3:ad:bb Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1c:9e:bc Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:b2:49:02 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:3e:b0:5f:4b:63:14 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:f6:41:79:18:98:94 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 
Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.003331 4947 manager_no_libpfm.go:29] cAdvisor is build without cgo 
and/or libpfm support. Perf event counters are not available. Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.003586 4947 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.004293 4947 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.004616 4947 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.004673 4947 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"10
0Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.005115 4947 topology_manager.go:138] "Creating topology manager with none policy" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.005164 4947 container_manager_linux.go:303] "Creating device plugin manager" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.005405 4947 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.005658 4947 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.006186 4947 state_mem.go:36] "Initialized new in-memory state store" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.006388 4947 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.007724 4947 kubelet.go:418] "Attempting to sync node with API server" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.007763 4947 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.007859 4947 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.007885 4947 kubelet.go:324] "Adding apiserver pod source" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.007906 4947 apiserver.go:42] "Waiting for node sync before watching apiserver pods" 
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.012819 4947 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 25 00:09:21 crc kubenswrapper[4947]: W0125 00:09:21.012858 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.012966 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Jan 25 00:09:21 crc kubenswrapper[4947]: W0125 00:09:21.012968 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.013054 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.013284 4947 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.014380 4947 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015016 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015043 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015053 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015061 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015095 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015104 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015113 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015144 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015156 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015168 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015184 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015196 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015422 4947 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.015943 4947 server.go:1280] "Started kubelet"
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.016659 4947 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.016788 4947 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.017401 4947 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 25 00:09:21 crc systemd[1]: Started Kubernetes Kubelet.
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.018812 4947 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.019481 4947 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188dd0c22c898057 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-25 00:09:21.015914583 +0000 UTC m=+0.248905033,LastTimestamp:2026-01-25 00:09:21.015914583 +0000 UTC m=+0.248905033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.020360 4947 server.go:460] "Adding debug handlers to kubelet server"
Jan 25 00:09:21 crc 
kubenswrapper[4947]: I0125 00:09:21.021762 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.021886 4947 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.021925 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 23:11:46.709996903 +0000 UTC Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.022175 4947 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.022852 4947 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.022873 4947 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.022946 4947 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.023712 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="200ms" Jan 25 00:09:21 crc kubenswrapper[4947]: W0125 00:09:21.023831 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.023879 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.024168 4947 factory.go:55] Registering systemd factory Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.024210 4947 factory.go:221] Registration of the systemd container factory successfully Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.024733 4947 factory.go:153] Registering CRI-O factory Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.024773 4947 factory.go:221] Registration of the crio container factory successfully Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.024870 4947 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.024945 4947 factory.go:103] Registering Raw factory Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.024972 4947 manager.go:1196] Started watching for new ooms in manager Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.026601 4947 manager.go:319] Starting recovery of all containers Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.042742 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.042876 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 25 
00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.042904 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.042925 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.042947 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.042973 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.042993 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043014 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043040 4947 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043060 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043079 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043100 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043149 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043175 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043195 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043219 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043238 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043258 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043278 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043299 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043318 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043338 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043360 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043381 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043403 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043424 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043448 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" 
seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043473 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043538 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043608 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043628 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043690 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043710 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043730 4947 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043750 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043769 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043790 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043811 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043830 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043851 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043907 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043929 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043949 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043970 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.043990 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044009 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044031 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044054 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044073 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044093 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044185 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044210 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044239 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044261 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044283 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044317 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044339 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044361 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044382 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044402 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044424 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044445 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044464 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044486 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044504 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044525 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044542 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044562 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.044584 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.045574 4947 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.045636 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.045747 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.045772 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.045838 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.045859 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.045882 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.045900 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.045930 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.045952 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.045980 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046001 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046021 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046042 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046060 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046083 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046103 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046123 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046183 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046202 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046224 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046245 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046264 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046282 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046302 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046324 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046346 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046365 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046387 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046412 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046481 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046501 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046526 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046545 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046564 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046586 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046614 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046640 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046662 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046682 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046704 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046725 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046748 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046769 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046790 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046812 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046833 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046852 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046871 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046889 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046908 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046931 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046952 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046973 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.046993 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047013 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047093 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047114 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047160 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047180 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047200 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047220 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047244 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047264 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047288 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047307 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047327 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047347 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047366 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047387 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047406 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047429 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047450 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047470 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047492 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047514 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047535 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047557 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047578 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047598 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047618 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047675 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047696 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047721 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047740 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047762 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047783 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047806 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047826 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047851 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047872 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047894 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047918 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047939 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047962 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.047982 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048005 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048024 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048043 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048064 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048083 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048108 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048151 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048171 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048255 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" 
volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048273 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048294 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048312 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048331 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048350 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048371 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048391 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048411 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048433 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048451 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048472 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048493 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048513 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048533 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048552 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048571 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048592 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048613 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048632 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048652 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048670 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048690 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048709 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048729 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048750 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048767 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048788 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048807 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048833 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048852 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048871 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048890 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048909 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048930 4947 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048949 4947 reconstruct.go:97] "Volume reconstruction finished" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.048964 4947 reconciler.go:26] "Reconciler: start to sync state" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.064315 4947 manager.go:324] Recovery completed Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.083076 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.085272 4947 
kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.086386 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.086431 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.086446 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.087647 4947 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.087671 4947 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.087702 4947 state_mem.go:36] "Initialized new in-memory state store" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.088288 4947 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.088361 4947 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.088410 4947 kubelet.go:2335] "Starting kubelet main sync loop" Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.088497 4947 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 25 00:09:21 crc kubenswrapper[4947]: W0125 00:09:21.089920 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.090011 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.100789 4947 policy_none.go:49] "None policy: Start" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.101657 4947 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.101690 4947 state_mem.go:35] "Initializing new in-memory state store" Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.122837 4947 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.155160 4947 manager.go:334] "Starting Device Plugin manager" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.155223 4947 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.155267 4947 server.go:79] "Starting device plugin registration server" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.155850 4947 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.155875 4947 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.156157 4947 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.156363 4947 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.156395 4947 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.167824 4947 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.188661 4947 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.188909 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.191169 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.191640 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.191662 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.192015 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.192374 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.192507 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.192970 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.193014 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.193030 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.193323 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.193607 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.193704 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.193926 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.193984 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.193999 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.194076 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.194094 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.194105 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.194284 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.194728 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.194787 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.195045 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.195090 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.195107 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.195195 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.195216 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.195225 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.195345 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.195404 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.195458 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.196096 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.196151 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.196231 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.196311 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.196332 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.196343 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.196477 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.196506 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.196846 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.196908 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.196929 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.197184 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.197238 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.197253 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.224557 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="400ms" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.251487 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" 
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.251562 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.251626 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.251659 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.251694 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.251744 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" 
Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.251780 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.251810 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.251840 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.251920 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.251972 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.252010 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.252042 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.252073 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.252101 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.256792 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.258787 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.258886 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.258906 4947 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.258946 4947 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.259513 4947 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353330 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353413 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353449 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353484 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353524 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353561 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353607 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353643 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353682 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353718 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353732 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353738 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353796 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353720 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.354059 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.354105 4947 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.354175 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.354219 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.354259 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353837 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.354563 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353794 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353807 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353817 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.354619 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353823 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.353878 
4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.354716 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.354715 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.354622 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.460237 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.462524 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.462555 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.462563 4947 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.462588 4947 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.463158 4947 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.522863 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.527036 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.543082 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.566468 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: W0125 00:09:21.567724 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-fbb7dadb153c9a91b75f55e4a292c36345ee75d584515436f6c6788e1c35ff78 WatchSource:0}: Error finding container fbb7dadb153c9a91b75f55e4a292c36345ee75d584515436f6c6788e1c35ff78: Status 404 returned error can't find the container with id fbb7dadb153c9a91b75f55e4a292c36345ee75d584515436f6c6788e1c35ff78 Jan 25 00:09:21 crc kubenswrapper[4947]: W0125 00:09:21.569526 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-331aa06eaaf78b4ef385a69502120661488de4a4490bc487edbee40bc40cf3c1 WatchSource:0}: Error finding container 331aa06eaaf78b4ef385a69502120661488de4a4490bc487edbee40bc40cf3c1: Status 404 returned error can't find the container with id 331aa06eaaf78b4ef385a69502120661488de4a4490bc487edbee40bc40cf3c1 Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.572556 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 00:09:21 crc kubenswrapper[4947]: W0125 00:09:21.587041 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-78b6c1162d16175dbc873c561984530bfa1e22e7a83a11b4e0430cafc67fe6c1 WatchSource:0}: Error finding container 78b6c1162d16175dbc873c561984530bfa1e22e7a83a11b4e0430cafc67fe6c1: Status 404 returned error can't find the container with id 78b6c1162d16175dbc873c561984530bfa1e22e7a83a11b4e0430cafc67fe6c1 Jan 25 00:09:21 crc kubenswrapper[4947]: W0125 00:09:21.590503 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-0d724686ba079198403fd4cb80554349ad0e05c50edabb8b14fdaabf9708615c WatchSource:0}: Error finding container 0d724686ba079198403fd4cb80554349ad0e05c50edabb8b14fdaabf9708615c: Status 404 returned error can't find the container with id 0d724686ba079198403fd4cb80554349ad0e05c50edabb8b14fdaabf9708615c Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.625705 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="800ms" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.863310 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.864664 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.864718 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 
25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.864736 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:21 crc kubenswrapper[4947]: I0125 00:09:21.864772 4947 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.865423 4947 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Jan 25 00:09:21 crc kubenswrapper[4947]: W0125 00:09:21.926700 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Jan 25 00:09:21 crc kubenswrapper[4947]: E0125 00:09:21.926800 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.020454 4947 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.022821 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 18:16:24.143945422 +0000 UTC Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.098117 4947 generic.go:334] "Generic (PLEG): container finished" 
podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="e28d9e5ded99984c96a07848bed082c840a86b273e0809a7103e07e789b81147" exitCode=0 Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.098172 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e28d9e5ded99984c96a07848bed082c840a86b273e0809a7103e07e789b81147"} Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.098303 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fbb7dadb153c9a91b75f55e4a292c36345ee75d584515436f6c6788e1c35ff78"} Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.098425 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.100500 4947 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d" exitCode=0 Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.100588 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d"} Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.100629 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0d724686ba079198403fd4cb80554349ad0e05c50edabb8b14fdaabf9708615c"} Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.100754 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.101238 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.101276 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.101291 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.101612 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.101658 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.101673 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.103160 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567"} Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.103200 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"78b6c1162d16175dbc873c561984530bfa1e22e7a83a11b4e0430cafc67fe6c1"} Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.106074 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923" exitCode=0 Jan 25 00:09:22 crc 
kubenswrapper[4947]: I0125 00:09:22.106182 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923"} Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.106260 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a207852b7b1543c03dc47677866a9e1fad4f8df8a89f47ef87183436862ee696"} Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.106480 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.107462 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.107491 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.107501 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.111375 4947 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="7f77f372463a9126e2e0c7904e58fcca5c3868b594d72878374d63b703decdb2" exitCode=0 Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.111466 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"7f77f372463a9126e2e0c7904e58fcca5c3868b594d72878374d63b703decdb2"} Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.111548 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"331aa06eaaf78b4ef385a69502120661488de4a4490bc487edbee40bc40cf3c1"} Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.111753 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.112907 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.112948 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.112961 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.116569 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.117618 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.117668 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.117682 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:22 crc kubenswrapper[4947]: W0125 00:09:22.227096 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Jan 25 00:09:22 crc kubenswrapper[4947]: E0125 00:09:22.227238 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Jan 25 00:09:22 crc kubenswrapper[4947]: W0125 00:09:22.304224 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Jan 25 00:09:22 crc kubenswrapper[4947]: E0125 00:09:22.304346 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Jan 25 00:09:22 crc kubenswrapper[4947]: E0125 00:09:22.427591 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="1.6s" Jan 25 00:09:22 crc kubenswrapper[4947]: E0125 00:09:22.439246 4947 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188dd0c22c898057 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-25 00:09:21.015914583 +0000 UTC m=+0.248905033,LastTimestamp:2026-01-25 00:09:21.015914583 +0000 UTC 
m=+0.248905033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 25 00:09:22 crc kubenswrapper[4947]: W0125 00:09:22.567073 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Jan 25 00:09:22 crc kubenswrapper[4947]: E0125 00:09:22.567186 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.666399 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.667931 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.667966 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.667975 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.667999 4947 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 25 00:09:22 crc kubenswrapper[4947]: E0125 00:09:22.668508 4947 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Jan 
25 00:09:22 crc kubenswrapper[4947]: I0125 00:09:22.945784 4947 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 25 00:09:22 crc kubenswrapper[4947]: E0125 00:09:22.947079 4947 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.023689 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 17:44:08.48680075 +0000 UTC Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.115985 4947 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f93db88442ef5460a399110b43ee9b68fa585bb81ec5430c8873dc2d4f3cf725" exitCode=0 Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.116068 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f93db88442ef5460a399110b43ee9b68fa585bb81ec5430c8873dc2d4f3cf725"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.116248 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.117229 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.117262 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.117274 4947 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.119194 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"20298eba3286e5999a381eba946a8d66115b05b2c0b73c61c7c005aa95bd1f27"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.119367 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.120701 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.120735 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.120749 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.123769 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.123802 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.123817 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.123909 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.124654 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.124699 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.124713 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.127897 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.127929 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.127930 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.128057 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157"} Jan 
25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.128681 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.128709 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.128722 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.131118 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.131248 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.131261 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.131273 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.131283 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42"} Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.131379 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.132066 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.132116 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.132154 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.323452 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:23 crc kubenswrapper[4947]: I0125 00:09:23.682757 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.024468 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 02:03:37.932403812 +0000 UTC Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.138076 4947 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d10f9303fa6076a41afb25c4d043318794314c6cc59b46e03cd0bdd746b1e601" exitCode=0 Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.138293 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d10f9303fa6076a41afb25c4d043318794314c6cc59b46e03cd0bdd746b1e601"} Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.138374 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.138531 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.138557 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.138570 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.140239 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.140239 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.140363 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.140392 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.140314 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.140451 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.140624 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.140655 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.140676 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.268915 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.270917 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.270982 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.271004 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.271052 4947 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 25 00:09:24 crc kubenswrapper[4947]: I0125 00:09:24.661817 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:25 crc kubenswrapper[4947]: I0125 00:09:25.024721 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 21:02:17.516977459 +0000 UTC Jan 25 00:09:25 crc kubenswrapper[4947]: I0125 00:09:25.147573 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4a15a23addbc1b8cb1796b79902764e660dc4fc0580462bf7731757d5f1a0950"} Jan 25 00:09:25 crc kubenswrapper[4947]: I0125 00:09:25.147681 4947 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9965e2f5bdf4e230aeba0b28f3a4fdcc763f1aff38f22bc0e6943c0743c74230"} Jan 25 00:09:25 crc kubenswrapper[4947]: I0125 00:09:25.147712 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:25 crc kubenswrapper[4947]: I0125 00:09:25.147781 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:25 crc kubenswrapper[4947]: I0125 00:09:25.149738 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:25 crc kubenswrapper[4947]: I0125 00:09:25.149812 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:25 crc kubenswrapper[4947]: I0125 00:09:25.149984 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:25 crc kubenswrapper[4947]: I0125 00:09:25.150027 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:25 crc kubenswrapper[4947]: I0125 00:09:25.150197 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:25 crc kubenswrapper[4947]: I0125 00:09:25.150429 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:26 crc kubenswrapper[4947]: I0125 00:09:26.024899 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 15:10:15.389053282 +0000 UTC Jan 25 00:09:26 crc kubenswrapper[4947]: I0125 00:09:26.161530 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cdcfd5fa8ec227693c167e99fd7b156bb6bbaddd092b65155ab263cbd8660322"} Jan 25 00:09:26 crc kubenswrapper[4947]: I0125 00:09:26.161603 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4ce3214b2ff8e7214adb833b87eca4bf7a980d278ddfa27b68ccbd2945bc47fc"} Jan 25 00:09:26 crc kubenswrapper[4947]: I0125 00:09:26.161627 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e4244155a87fddbbcf4f9cab51737b8e3761a3b81afab80397c8e9ffff0a28d6"} Jan 25 00:09:26 crc kubenswrapper[4947]: I0125 00:09:26.161666 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:26 crc kubenswrapper[4947]: I0125 00:09:26.163016 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:26 crc kubenswrapper[4947]: I0125 00:09:26.163058 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:26 crc kubenswrapper[4947]: I0125 00:09:26.163079 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:26 crc kubenswrapper[4947]: I0125 00:09:26.999348 4947 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.025061 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 11:26:45.394013713 +0000 UTC Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.047453 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.047919 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.049836 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.049900 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.049934 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.166825 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.167839 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.167890 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.167900 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.567867 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.568235 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.570080 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:27 crc 
kubenswrapper[4947]: I0125 00:09:27.570170 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.570202 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:27 crc kubenswrapper[4947]: I0125 00:09:27.576511 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:28 crc kubenswrapper[4947]: I0125 00:09:28.026089 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 15:44:33.259414726 +0000 UTC Jan 25 00:09:28 crc kubenswrapper[4947]: I0125 00:09:28.170321 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:28 crc kubenswrapper[4947]: I0125 00:09:28.170658 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:28 crc kubenswrapper[4947]: I0125 00:09:28.171950 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:28 crc kubenswrapper[4947]: I0125 00:09:28.171996 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:28 crc kubenswrapper[4947]: I0125 00:09:28.172015 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:28 crc kubenswrapper[4947]: I0125 00:09:28.173170 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:28 crc kubenswrapper[4947]: I0125 00:09:28.173210 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:28 crc 
kubenswrapper[4947]: I0125 00:09:28.173227 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:28 crc kubenswrapper[4947]: I0125 00:09:28.309928 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 25 00:09:28 crc kubenswrapper[4947]: I0125 00:09:28.409040 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:29 crc kubenswrapper[4947]: I0125 00:09:29.027296 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:55:05.656734428 +0000 UTC Jan 25 00:09:29 crc kubenswrapper[4947]: I0125 00:09:29.179796 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:29 crc kubenswrapper[4947]: I0125 00:09:29.179834 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:29 crc kubenswrapper[4947]: I0125 00:09:29.181572 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:29 crc kubenswrapper[4947]: I0125 00:09:29.181627 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:29 crc kubenswrapper[4947]: I0125 00:09:29.181644 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:29 crc kubenswrapper[4947]: I0125 00:09:29.181744 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:29 crc kubenswrapper[4947]: I0125 00:09:29.181785 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:29 crc kubenswrapper[4947]: 
I0125 00:09:29.181808 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:29 crc kubenswrapper[4947]: I0125 00:09:29.489039 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:30 crc kubenswrapper[4947]: I0125 00:09:30.027507 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 23:13:24.071785701 +0000 UTC Jan 25 00:09:30 crc kubenswrapper[4947]: I0125 00:09:30.166667 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 25 00:09:30 crc kubenswrapper[4947]: I0125 00:09:30.182672 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:30 crc kubenswrapper[4947]: I0125 00:09:30.182765 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:30 crc kubenswrapper[4947]: I0125 00:09:30.184038 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:30 crc kubenswrapper[4947]: I0125 00:09:30.184095 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:30 crc kubenswrapper[4947]: I0125 00:09:30.184107 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:30 crc kubenswrapper[4947]: I0125 00:09:30.184535 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:30 crc kubenswrapper[4947]: I0125 00:09:30.184584 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:30 crc kubenswrapper[4947]: I0125 00:09:30.184603 
4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:31 crc kubenswrapper[4947]: I0125 00:09:31.027718 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 15:11:51.170274798 +0000 UTC Jan 25 00:09:31 crc kubenswrapper[4947]: E0125 00:09:31.167964 4947 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 25 00:09:32 crc kubenswrapper[4947]: I0125 00:09:32.028161 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 09:35:08.284027436 +0000 UTC Jan 25 00:09:32 crc kubenswrapper[4947]: I0125 00:09:32.489310 4947 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 25 00:09:32 crc kubenswrapper[4947]: I0125 00:09:32.489468 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 25 00:09:33 crc kubenswrapper[4947]: I0125 00:09:33.022737 4947 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 25 00:09:33 crc kubenswrapper[4947]: I0125 00:09:33.029044 4947 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 23:47:21.693232188 +0000 UTC Jan 25 00:09:33 crc kubenswrapper[4947]: W0125 00:09:33.854226 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 25 00:09:33 crc kubenswrapper[4947]: I0125 00:09:33.854386 4947 trace.go:236] Trace[1536081183]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Jan-2026 00:09:23.852) (total time: 10001ms): Jan 25 00:09:33 crc kubenswrapper[4947]: Trace[1536081183]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:09:33.854) Jan 25 00:09:33 crc kubenswrapper[4947]: Trace[1536081183]: [10.001658458s] [10.001658458s] END Jan 25 00:09:33 crc kubenswrapper[4947]: E0125 00:09:33.854425 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 25 00:09:34 crc kubenswrapper[4947]: E0125 00:09:34.029157 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 25 00:09:34 crc kubenswrapper[4947]: I0125 00:09:34.029251 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 23:37:11.138368969 +0000 
UTC Jan 25 00:09:34 crc kubenswrapper[4947]: E0125 00:09:34.272202 4947 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 25 00:09:34 crc kubenswrapper[4947]: W0125 00:09:34.318406 4947 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 25 00:09:34 crc kubenswrapper[4947]: I0125 00:09:34.318546 4947 trace.go:236] Trace[529832670]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Jan-2026 00:09:24.316) (total time: 10001ms): Jan 25 00:09:34 crc kubenswrapper[4947]: Trace[529832670]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (00:09:34.318) Jan 25 00:09:34 crc kubenswrapper[4947]: Trace[529832670]: [10.001708631s] [10.001708631s] END Jan 25 00:09:34 crc kubenswrapper[4947]: E0125 00:09:34.318575 4947 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 25 00:09:34 crc kubenswrapper[4947]: I0125 00:09:34.662059 4947 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 25 00:09:34 crc kubenswrapper[4947]: I0125 00:09:34.662264 4947 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 25 00:09:34 crc kubenswrapper[4947]: I0125 00:09:34.723717 4947 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 25 00:09:34 crc kubenswrapper[4947]: I0125 00:09:34.723803 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 25 00:09:35 crc kubenswrapper[4947]: I0125 00:09:35.030266 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 11:08:52.007240758 +0000 UTC Jan 25 00:09:36 crc kubenswrapper[4947]: I0125 00:09:36.030801 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 23:41:03.086785493 +0000 UTC Jan 25 00:09:37 crc kubenswrapper[4947]: I0125 00:09:37.031218 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 21:16:22.971767243 +0000 UTC Jan 25 00:09:37 crc kubenswrapper[4947]: I0125 00:09:37.472641 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:37 crc kubenswrapper[4947]: I0125 
00:09:37.474622 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:37 crc kubenswrapper[4947]: I0125 00:09:37.474658 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:37 crc kubenswrapper[4947]: I0125 00:09:37.474668 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:37 crc kubenswrapper[4947]: I0125 00:09:37.474716 4947 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 25 00:09:37 crc kubenswrapper[4947]: E0125 00:09:37.481807 4947 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.032227 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 12:28:23.442228201 +0000 UTC Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.090312 4947 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.346848 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.347117 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.348909 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.348999 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 
00:09:38.349023 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.366184 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.415101 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.415322 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.416761 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.416809 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:38 crc kubenswrapper[4947]: I0125 00:09:38.416823 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.033105 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 09:43:52.734824074 +0000 UTC Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.210946 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.213193 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.213260 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.213283 4947 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.671540 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.671810 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.673534 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.673595 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.673618 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.679275 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.722382 4947 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.725933 4947 trace.go:236] Trace[1964698199]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Jan-2026 00:09:25.411) (total time: 14314ms): Jan 25 00:09:39 crc kubenswrapper[4947]: Trace[1964698199]: ---"Objects listed" error: 14314ms (00:09:39.725) Jan 25 00:09:39 crc kubenswrapper[4947]: Trace[1964698199]: [14.314331381s] [14.314331381s] END Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.725954 4947 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.725967 4947 
reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.727604 4947 trace.go:236] Trace[1399948782]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Jan-2026 00:09:25.059) (total time: 14667ms): Jan 25 00:09:39 crc kubenswrapper[4947]: Trace[1399948782]: ---"Objects listed" error: 14667ms (00:09:39.727) Jan 25 00:09:39 crc kubenswrapper[4947]: Trace[1399948782]: [14.667614283s] [14.667614283s] END Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.727641 4947 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.774766 4947 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.800844 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.808555 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.857641 4947 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:48884->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 25 00:09:39 crc kubenswrapper[4947]: I0125 00:09:39.857726 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 
192.168.126.11:48884->192.168.126.11:17697: read: connection reset by peer" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.019842 4947 apiserver.go:52] "Watching apiserver" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.031193 4947 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.031492 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf"] Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.031978 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.031986 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.032047 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.032184 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.032249 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.032282 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.032298 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.032314 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.032321 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.033994 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 19:39:58.734447231 +0000 UTC Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.034903 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.035390 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.035775 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.035985 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.036722 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.037286 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.037951 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.038194 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.038308 4947 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"iptables-alerter-script" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.061836 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.078694 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.087970 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.100990 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.115583 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.124684 4947 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.124971 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128668 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128722 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128750 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128778 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128801 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128823 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128849 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128879 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128903 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128925 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128949 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128972 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.128999 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129058 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129080 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129105 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129152 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129179 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129206 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 
00:09:40.129229 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129263 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129320 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129349 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129379 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129384 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: 
"metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129435 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129502 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129555 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129605 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129641 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 25 00:09:40 crc 
kubenswrapper[4947]: I0125 00:09:40.129675 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129767 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129810 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129846 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129880 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129923 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129981 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130025 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130062 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130102 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130167 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 25 00:09:40 crc 
kubenswrapper[4947]: I0125 00:09:40.130203 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130248 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130284 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130319 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130358 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130544 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130577 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130612 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130840 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130876 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130912 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 
00:09:40.130947 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131060 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131097 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131159 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131236 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131270 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131305 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131340 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131375 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131410 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131443 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131475 
4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131510 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131554 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131589 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131628 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131662 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131699 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131739 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131773 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131834 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131870 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131905 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131937 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131972 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132064 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132107 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132192 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 25 
00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132233 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132269 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132304 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132343 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132386 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132419 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132453 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132487 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132522 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132560 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132672 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: 
\"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132713 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132752 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132803 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132836 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.133034 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.133071 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.133104 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.133167 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.133204 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.133238 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.133272 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.133309 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129559 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.133344 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129817 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129893 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.129914 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130027 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130070 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.130792 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131207 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131246 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131509 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131628 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131629 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.131942 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132047 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132577 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.132750 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.134066 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.134820 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.135708 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.135863 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.136774 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.136745 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.137459 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.137619 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.137576 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.136397 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.137765 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.136552 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.136583 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.136673 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.136718 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.137761 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.137913 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.133378 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.137965 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.137997 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138024 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138064 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138113 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138159 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138184 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138194 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138199 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138210 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.136352 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138376 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138439 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138502 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138560 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138617 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138676 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138735 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138793 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138852 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138909 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138962 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139016 4947 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139071 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139181 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139245 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139304 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139362 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: 
\"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139418 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139473 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139528 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139591 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139656 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139716 4947 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139776 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139872 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139925 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139980 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140038 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140093 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140196 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140257 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140308 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140374 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140433 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140489 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140547 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140607 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140666 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140724 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140788 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140868 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.141986 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142063 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142171 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142234 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142292 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142351 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142415 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142478 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142541 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 25 
00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142602 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142667 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142768 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143083 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143184 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143240 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") 
pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143328 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143376 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143514 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143688 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143743 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143798 4947 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143887 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143935 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145386 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145553 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145584 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.146585 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18f
ac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.155773 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.156169 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.156385 4947 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.156496 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.156602 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.156706 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.138464 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139076 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139116 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.160138 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139110 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139370 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139508 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.160187 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139502 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139572 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.139696 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140272 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140306 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.140501 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.141380 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.141511 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.141783 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.141828 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.141928 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142073 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142184 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.142323 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.160298 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.143012 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.144460 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.144476 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.144596 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.144683 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.144703 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.144717 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.144765 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.144813 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145020 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145046 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145067 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145314 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145380 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145169 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145449 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145744 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145898 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.145969 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.146091 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.146199 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.146265 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.146449 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.146470 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.146466 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.146820 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.146889 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.147527 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.147986 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.148273 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.148530 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.149444 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.149505 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.149709 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.149816 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.150081 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.150323 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.150810 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.151113 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.151497 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.151946 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.152547 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.152643 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.152864 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.153027 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.153503 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.153658 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.153769 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.153780 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.153809 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.154086 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.153853 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.154876 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.155357 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.155392 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.155572 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.156195 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.156370 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.156612 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.156688 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.157097 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.157238 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.157283 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:09:40.657237646 +0000 UTC m=+19.890228286 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.157519 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.157534 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.158084 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.158354 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.158163 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.158545 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.158599 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.158857 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.158868 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.161353 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.161441 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.161508 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.161568 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.161621 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.161687 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.161745 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.161811 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.161883 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.161952 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.162006 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") 
" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.162060 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163377 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163454 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163489 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163568 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163595 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163620 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163652 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163686 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163717 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163739 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163770 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163793 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163813 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163837 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163971 4947 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163986 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.163997 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164011 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164022 4947 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164033 4947 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 
00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164044 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164056 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164069 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164080 4947 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164090 4947 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164101 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164112 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164121 4947 reconciler_common.go:293] "Volume detached for 
volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164150 4947 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164165 4947 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164176 4947 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164188 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164200 4947 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164212 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164222 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164233 4947 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164247 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164259 4947 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164274 4947 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164285 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164294 4947 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164304 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 
crc kubenswrapper[4947]: I0125 00:09:40.164314 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164324 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164334 4947 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164352 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164361 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164371 4947 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164381 4947 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164393 4947 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164402 4947 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164411 4947 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164420 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164431 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164442 4947 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164453 4947 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164462 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node 
\"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164472 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164482 4947 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164492 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164502 4947 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164513 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.164512 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.164606 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-25 00:09:40.664583462 +0000 UTC m=+19.897574132 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164522 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164666 4947 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164695 4947 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164716 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164733 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164750 4947 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164766 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164783 4947 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164799 4947 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164815 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164829 4947 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164845 4947 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164861 4947 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 
25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164882 4947 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164901 4947 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164916 4947 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164931 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164947 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164961 4947 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164977 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.164992 4947 reconciler_common.go:293] "Volume detached for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165007 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165029 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165047 4947 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165062 4947 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165076 4947 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165094 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165112 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: 
I0125 00:09:40.165158 4947 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165175 4947 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165191 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165204 4947 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165219 4947 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165234 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165249 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165264 4947 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165281 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165296 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165315 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165333 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165352 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165367 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165370 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165384 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165405 4947 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165420 4947 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165436 4947 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165451 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.165457 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.165585 4947 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:40.665551376 +0000 UTC m=+19.898542026 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165720 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165468 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165936 4947 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165951 4947 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.165964 4947 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166003 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166015 4947 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166025 4947 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166035 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166045 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166055 4947 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166067 4947 reconciler_common.go:293] "Volume detached for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166079 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166091 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166103 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166116 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166139 4947 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166150 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166161 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 
00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166172 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166183 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166193 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166202 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166212 4947 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166221 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166232 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166242 4947 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166253 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166262 4947 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166272 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166281 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166290 4947 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166300 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166311 4947 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" 
(UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166320 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166331 4947 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166341 4947 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166351 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166361 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.166814 4947 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.172857 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.174691 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.174974 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.175897 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.176719 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.177848 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.179055 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.179310 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.179332 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.179348 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.179417 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:40.679394099 +0000 UTC m=+19.912384539 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.179556 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.180475 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.183525 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.183559 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.183647 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.183669 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.183742 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:40.683718423 +0000 UTC m=+19.916708863 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.186734 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.186952 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.187427 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.188598 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.192296 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.193933 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.194871 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.195247 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.195415 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.195438 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.195692 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.195871 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.195873 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.196293 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.198576 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.198609 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.198648 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.199102 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.199878 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.200200 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.200286 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.200297 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.201258 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.201640 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.201952 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.202218 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.203182 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.204530 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.204710 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.204848 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.205780 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.206337 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.206817 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.206982 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.207235 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.207454 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.207533 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.208149 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.208515 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.208578 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.210247 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.211441 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.212826 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.216230 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.217080 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.217926 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.220358 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.223470 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.223815 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.224003 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.224235 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.225432 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.225712 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.229600 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.229989 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.233051 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.256079 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587" exitCode=255 Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.256308 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587"} Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267251 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267339 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267389 4947 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267406 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath 
\"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267417 4947 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267428 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267441 4947 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267452 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267461 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267473 4947 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267486 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 
00:09:40.267498 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267511 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267522 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267534 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267544 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267554 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267565 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267576 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: 
\"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267586 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267596 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267608 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267617 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267626 4947 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267635 4947 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267648 4947 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 25 
00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267658 4947 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267668 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267678 4947 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267690 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267700 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267710 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267721 4947 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267734 4947 reconciler_common.go:293] 
"Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267743 4947 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.267822 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.285074 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.298004 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.300580 4947 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.300734 4947 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.298420 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.300624 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.298276 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.300202 4947 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302254 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302324 4947 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302380 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302438 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302506 4947 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302578 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302648 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302713 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302770 4947 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302827 4947 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302883 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302940 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.302999 4947 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.303055 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.303110 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.303213 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.303288 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.303356 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.303413 4947 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" 
DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.303472 4947 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.311222 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.315959 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.316313 4947 scope.go:117] "RemoveContainer" containerID="86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.327586 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.333114 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.355682 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.368855 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.370494 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.376810 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.381033 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.391055 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.404453 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.404481 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.404491 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.404502 4947 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.405555 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: W0125 00:09:40.417740 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-89c43ee14330bafa5ccf697e907c218ae9778f4e3b321f71a8480da557845b5a WatchSource:0}: Error finding container 89c43ee14330bafa5ccf697e907c218ae9778f4e3b321f71a8480da557845b5a: Status 404 returned error can't find the container with id 89c43ee14330bafa5ccf697e907c218ae9778f4e3b321f71a8480da557845b5a Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.421422 4947 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.443705 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.707820 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708058 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:09:41.708021725 +0000 UTC m=+20.941012175 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.708196 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.708249 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.708284 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:40 crc kubenswrapper[4947]: I0125 00:09:40.708306 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708408 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708429 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708446 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708454 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708466 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708474 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708481 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708490 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708518 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:41.708490906 +0000 UTC m=+20.941481386 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708549 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:41.708534687 +0000 UTC m=+20.941525167 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708579 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:41.708567758 +0000 UTC m=+20.941558238 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:40 crc kubenswrapper[4947]: E0125 00:09:40.708600 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:41.708589748 +0000 UTC m=+20.941580228 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.034359 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 21:57:31.093080334 +0000 UTC Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.092049 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.092728 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.093800 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.094571 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.095294 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.095810 4947 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.097612 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.098228 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.099326 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.099865 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.100818 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.101625 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.102183 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.103050 4947 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.103555 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.104438 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.104999 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.105460 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.106473 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.107011 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.107850 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.108432 4947 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.108838 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.109844 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.110312 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.111394 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.112000 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.112892 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.113664 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.114634 4947 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.115204 4947 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.115325 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.117256 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.118301 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.118809 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.120559 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.121877 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 25 00:09:41 
crc kubenswrapper[4947]: I0125 00:09:41.122535 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.123610 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.124396 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.125469 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.126054 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.126104 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.127100 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.127889 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.128685 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.129207 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.130018 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.130709 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" 
path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.131536 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.131991 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.132779 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.133284 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.133825 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.134607 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.145262 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.164804 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.192795 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.207817 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.253055 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.259618 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6"} Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 
00:09:41.259668 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8"} Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.259680 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"497539bc10f42d322f3cc495f6e37e610eb1b2660005af423a8f5e9c051c1cbe"} Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.261795 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717"} Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.261825 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"33557528254a67298d143fc3bc0c98a21c1500acadc8b9119eb0339f05944f10"} Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.263891 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.265801 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59"} Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.265974 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.267156 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"89c43ee14330bafa5ccf697e907c218ae9778f4e3b321f71a8480da557845b5a"} Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.299488 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,
\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.319886 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.339742 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.357357 4947 csr.go:261] certificate signing request csr-lwkwk is approved, waiting to be issued Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.363423 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.368457 4947 csr.go:257] certificate signing request csr-lwkwk is issued Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.393160 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.412690 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.430791 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.465222 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.492629 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.505770 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.717895 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " 
Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.717974 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.718000 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.718018 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:41 crc kubenswrapper[4947]: I0125 00:09:41.718037 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718176 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 
00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718217 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:09:43.718194284 +0000 UTC m=+22.951184724 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718288 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:43.718261935 +0000 UTC m=+22.951252375 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718338 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718447 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718522 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718550 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718461 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:43.718441709 +0000 UTC m=+22.951432139 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718342 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718656 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718675 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718671 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:43.718632504 +0000 UTC m=+22.951623064 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:41 crc kubenswrapper[4947]: E0125 00:09:41.718746 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:43.718715416 +0000 UTC m=+22.951705896 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.034836 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 08:43:57.201057183 +0000 UTC Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.089509 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.089611 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.089668 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:42 crc kubenswrapper[4947]: E0125 00:09:42.089731 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:09:42 crc kubenswrapper[4947]: E0125 00:09:42.089855 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:09:42 crc kubenswrapper[4947]: E0125 00:09:42.089972 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.255495 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-2w6nd"] Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.255920 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-2w6nd" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.259883 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-mdgrh"] Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.260403 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: W0125 00:09:42.264017 4947 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 25 00:09:42 crc kubenswrapper[4947]: W0125 00:09:42.264051 4947 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Jan 25 00:09:42 crc kubenswrapper[4947]: E0125 00:09:42.264078 4947 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 25 00:09:42 crc kubenswrapper[4947]: E0125 00:09:42.264078 4947 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets 
\"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 25 00:09:42 crc kubenswrapper[4947]: W0125 00:09:42.264194 4947 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 25 00:09:42 crc kubenswrapper[4947]: E0125 00:09:42.264259 4947 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.264732 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.264847 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.265303 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.266099 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.269413 4947 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"proxy-tls" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.284957 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.306259 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 
00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.322505 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.338560 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.353810 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.370293 4947 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-25 00:04:41 +0000 UTC, rotation deadline is 2026-11-11 17:24:01.652225139 +0000 UTC Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.370346 4947 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6977h14m19.281882658s for next certificate rotation Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.371078 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.390965 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.407549 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.421850 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.427678 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxfgl\" (UniqueName: \"kubernetes.io/projected/0a5c5a9a-cc45-4715-8e37-35798d843870-kube-api-access-gxfgl\") pod \"node-resolver-2w6nd\" (UID: \"0a5c5a9a-cc45-4715-8e37-35798d843870\") " 
pod="openshift-dns/node-resolver-2w6nd" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.428016 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f67ec28-baae-409e-a42d-03a486e7a26b-proxy-tls\") pod \"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.428055 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqztm\" (UniqueName: \"kubernetes.io/projected/5f67ec28-baae-409e-a42d-03a486e7a26b-kube-api-access-hqztm\") pod \"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.428093 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0a5c5a9a-cc45-4715-8e37-35798d843870-hosts-file\") pod \"node-resolver-2w6nd\" (UID: \"0a5c5a9a-cc45-4715-8e37-35798d843870\") " pod="openshift-dns/node-resolver-2w6nd" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.428119 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f67ec28-baae-409e-a42d-03a486e7a26b-mcd-auth-proxy-config\") pod \"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.428183 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5f67ec28-baae-409e-a42d-03a486e7a26b-rootfs\") pod 
\"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.436298 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf
5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.454789 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.470152 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.485343 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.496327 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.505841 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.525837 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 
00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.529045 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxfgl\" (UniqueName: \"kubernetes.io/projected/0a5c5a9a-cc45-4715-8e37-35798d843870-kube-api-access-gxfgl\") pod \"node-resolver-2w6nd\" (UID: \"0a5c5a9a-cc45-4715-8e37-35798d843870\") " pod="openshift-dns/node-resolver-2w6nd" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.529107 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f67ec28-baae-409e-a42d-03a486e7a26b-proxy-tls\") pod \"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.529147 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hqztm\" (UniqueName: \"kubernetes.io/projected/5f67ec28-baae-409e-a42d-03a486e7a26b-kube-api-access-hqztm\") pod \"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.529184 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0a5c5a9a-cc45-4715-8e37-35798d843870-hosts-file\") pod \"node-resolver-2w6nd\" (UID: \"0a5c5a9a-cc45-4715-8e37-35798d843870\") " pod="openshift-dns/node-resolver-2w6nd" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.529208 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f67ec28-baae-409e-a42d-03a486e7a26b-mcd-auth-proxy-config\") pod \"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.529243 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5f67ec28-baae-409e-a42d-03a486e7a26b-rootfs\") pod \"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.529319 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/5f67ec28-baae-409e-a42d-03a486e7a26b-rootfs\") pod \"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.529414 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/0a5c5a9a-cc45-4715-8e37-35798d843870-hosts-file\") pod \"node-resolver-2w6nd\" (UID: \"0a5c5a9a-cc45-4715-8e37-35798d843870\") " pod="openshift-dns/node-resolver-2w6nd" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.534840 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5f67ec28-baae-409e-a42d-03a486e7a26b-proxy-tls\") pod \"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.547110 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqztm\" (UniqueName: \"kubernetes.io/projected/5f67ec28-baae-409e-a42d-03a486e7a26b-kube-api-access-hqztm\") pod \"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.552115 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.569664 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.583732 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.655264 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-kb5q7"] Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.655962 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: W0125 00:09:42.658104 4947 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: configmaps "cni-copy-resources" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 25 00:09:42 crc kubenswrapper[4947]: E0125 00:09:42.658218 4947 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-copy-resources\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.658265 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fvfwz"] Jan 25 00:09:42 crc kubenswrapper[4947]: W0125 00:09:42.658493 4947 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 25 00:09:42 crc kubenswrapper[4947]: E0125 00:09:42.658533 4947 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" 
logger="UnhandledError" Jan 25 00:09:42 crc kubenswrapper[4947]: W0125 00:09:42.658644 4947 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 25 00:09:42 crc kubenswrapper[4947]: E0125 00:09:42.658682 4947 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 25 00:09:42 crc kubenswrapper[4947]: W0125 00:09:42.658774 4947 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 25 00:09:42 crc kubenswrapper[4947]: E0125 00:09:42.658807 4947 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.659217 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.659777 4947 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.662099 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.662355 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.662437 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-9fspn"] Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.662673 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.662695 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.662937 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.663356 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.664172 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.664263 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.666004 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.667191 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.679217 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.692003 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.707450 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.727015 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.743629 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.754939 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.767731 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.788468 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.813625 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.828743 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.833432 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-etc-openvswitch\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.833517 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-cni-binary-copy\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.833556 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2d914454-2c17-47f2-aa53-aba3bfaad296-cni-binary-copy\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.833653 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-run-k8s-cni-cncf-io\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.833735 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-kubelet\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.833813 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-openvswitch\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.833889 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-ovn-kubernetes\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.834708 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-daemon-config\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.834772 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9x7t\" (UniqueName: \"kubernetes.io/projected/2d914454-2c17-47f2-aa53-aba3bfaad296-kube-api-access-h9x7t\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.834808 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.834841 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-ovn\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc 
kubenswrapper[4947]: I0125 00:09:42.834946 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-cni-dir\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835017 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-systemd\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835118 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-run-netns\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835206 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-var-lib-cni-multus\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835257 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-log-socket\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 
00:09:42.835309 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8bf5f940-5287-40f1-b208-535cdfcb0054-ovn-node-metrics-cert\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835385 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-var-lib-kubelet\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835431 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-run-multus-certs\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835467 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-etc-kubernetes\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835502 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-bin\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 
00:09:42.835543 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sssz9\" (UniqueName: \"kubernetes.io/projected/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-kube-api-access-sssz9\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835585 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-system-cni-dir\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835619 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-cnibin\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835696 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-var-lib-openvswitch\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835730 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-hostroot\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc 
kubenswrapper[4947]: I0125 00:09:42.835762 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-systemd-units\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835824 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-netns\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835863 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-conf-dir\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835893 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-env-overrides\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835940 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-os-release\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 
00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.835989 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836022 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-os-release\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836058 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh6bp\" (UniqueName: \"kubernetes.io/projected/8bf5f940-5287-40f1-b208-535cdfcb0054-kube-api-access-xh6bp\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836089 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836117 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-var-lib-cni-bin\") pod \"multus-9fspn\" (UID: 
\"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836202 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-slash\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836234 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-system-cni-dir\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836264 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-netd\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836296 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-config\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836372 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-script-lib\") pod \"ovnkube-node-fvfwz\" (UID: 
\"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836489 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-socket-dir-parent\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836535 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-node-log\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.836579 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-cnibin\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.846819 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.862352 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.879343 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.890954 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.912764 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.928757 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.937561 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.937625 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-var-lib-cni-bin\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.937663 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-slash\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.937701 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-system-cni-dir\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.937734 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-netd\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.937772 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-config\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.937782 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-var-lib-cni-bin\") pod \"multus-9fspn\" (UID: 
\"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.937816 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-script-lib\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.937802 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-slash\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.937841 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-netd\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.937870 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-system-cni-dir\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938193 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-socket-dir-parent\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: 
I0125 00:09:42.938314 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-node-log\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938203 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938449 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-cnibin\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938505 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-cnibin\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938214 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-socket-dir-parent\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938460 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-node-log\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938526 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-etc-openvswitch\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938712 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-cni-binary-copy\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938848 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2d914454-2c17-47f2-aa53-aba3bfaad296-cni-binary-copy\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938927 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-run-k8s-cni-cncf-io\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938991 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-kubelet\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938637 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-etc-openvswitch\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939037 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-run-k8s-cni-cncf-io\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939075 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-kubelet\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.938729 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-script-lib\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939169 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-openvswitch\") pod \"ovnkube-node-fvfwz\" (UID: 
\"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939200 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-config\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939276 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-openvswitch\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939350 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-ovn-kubernetes\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939449 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-daemon-config\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939523 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9x7t\" (UniqueName: \"kubernetes.io/projected/2d914454-2c17-47f2-aa53-aba3bfaad296-kube-api-access-h9x7t\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " 
pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939595 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939654 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939672 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-ovn\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939746 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-cni-dir\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939455 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-ovn-kubernetes\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939816 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-systemd\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939876 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-run-netns\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939899 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-systemd\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939920 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-var-lib-cni-multus\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939945 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-run-netns\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939951 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-cni-dir\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939958 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-log-socket\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.939977 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-var-lib-cni-multus\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940000 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-log-socket\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940009 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8bf5f940-5287-40f1-b208-535cdfcb0054-ovn-node-metrics-cert\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940036 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-var-lib-kubelet\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940056 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-run-multus-certs\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940071 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-etc-kubernetes\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940085 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-bin\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940105 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sssz9\" (UniqueName: \"kubernetes.io/projected/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-kube-api-access-sssz9\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940138 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-system-cni-dir\") 
pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940154 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-cnibin\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940154 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-run-multus-certs\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940177 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-var-lib-openvswitch\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940183 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-etc-kubernetes\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940210 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-hostroot\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc 
kubenswrapper[4947]: I0125 00:09:42.940195 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-host-var-lib-kubelet\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940236 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-system-cni-dir\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940242 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-var-lib-openvswitch\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940192 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-hostroot\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940250 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-cnibin\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940268 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-bin\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940272 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-systemd-units\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940298 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-netns\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940318 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-conf-dir\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940338 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-env-overrides\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940341 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-daemon-config\") pod 
\"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940358 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-netns\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940397 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-os-release\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940418 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-multus-conf-dir\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940421 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940500 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-os-release\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " 
pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940539 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh6bp\" (UniqueName: \"kubernetes.io/projected/8bf5f940-5287-40f1-b208-535cdfcb0054-kube-api-access-xh6bp\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940647 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-os-release\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940659 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2d914454-2c17-47f2-aa53-aba3bfaad296-os-release\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940696 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-env-overrides\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940328 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-systemd-units\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.940998 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-ovn\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.945853 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8bf5f940-5287-40f1-b208-535cdfcb0054-ovn-node-metrics-cert\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.962612 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.974522 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh6bp\" (UniqueName: \"kubernetes.io/projected/8bf5f940-5287-40f1-b208-535cdfcb0054-kube-api-access-xh6bp\") pod \"ovnkube-node-fvfwz\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.982181 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:42 crc kubenswrapper[4947]: I0125 00:09:42.995886 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:42Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.027854 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.035158 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 22:00:45.869167835 +0000 UTC Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.045095 4947 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.061747 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.079446 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.097633 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.112548 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.225494 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.230654 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5f67ec28-baae-409e-a42d-03a486e7a26b-mcd-auth-proxy-config\") pod \"machine-config-daemon-mdgrh\" (UID: \"5f67ec28-baae-409e-a42d-03a486e7a26b\") " pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.240824 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.250220 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxfgl\" (UniqueName: \"kubernetes.io/projected/0a5c5a9a-cc45-4715-8e37-35798d843870-kube-api-access-gxfgl\") pod \"node-resolver-2w6nd\" (UID: \"0a5c5a9a-cc45-4715-8e37-35798d843870\") " pod="openshift-dns/node-resolver-2w6nd" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.273402 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1" exitCode=0 Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.273457 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1"} Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.273504 4947 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"ab91914d18e527be722f5e70489e90096dc0e627d44b69e63be506f96778e303"} Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.281913 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.288204 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.302312 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.314073 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.345232 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.362737 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.376991 4947 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.391217 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.410232 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.431886 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.455212 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.469364 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-2w6nd" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.474592 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.474668 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: W0125 00:09:43.484727 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0a5c5a9a_cc45_4715_8e37_35798d843870.slice/crio-03527a92fe9240a557dbc6fbf1e9cbdf74839b42e50ef361d8b9ea5a1de1de14 WatchSource:0}: Error finding container 03527a92fe9240a557dbc6fbf1e9cbdf74839b42e50ef361d8b9ea5a1de1de14: Status 404 returned error can't find the container with id 03527a92fe9240a557dbc6fbf1e9cbdf74839b42e50ef361d8b9ea5a1de1de14 Jan 25 00:09:43 crc kubenswrapper[4947]: W0125 00:09:43.491812 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f67ec28_baae_409e_a42d_03a486e7a26b.slice/crio-69cff864ded933b145fe6c03e7145916170a9b1b8617c4fb88bf02bb79d2e72c WatchSource:0}: Error finding container 69cff864ded933b145fe6c03e7145916170a9b1b8617c4fb88bf02bb79d2e72c: Status 404 returned error can't find the container with id 69cff864ded933b145fe6c03e7145916170a9b1b8617c4fb88bf02bb79d2e72c Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.492245 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.508227 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.560827 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.569762 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2d914454-2c17-47f2-aa53-aba3bfaad296-cni-binary-copy\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.569980 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-cni-binary-copy\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.624810 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.641081 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.647390 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9x7t\" (UniqueName: 
\"kubernetes.io/projected/2d914454-2c17-47f2-aa53-aba3bfaad296-kube-api-access-h9x7t\") pod \"multus-9fspn\" (UID: \"2d914454-2c17-47f2-aa53-aba3bfaad296\") " pod="openshift-multus/multus-9fspn" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.647829 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sssz9\" (UniqueName: \"kubernetes.io/projected/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-kube-api-access-sssz9\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.746654 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.746757 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.746784 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.746807 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.746847 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.746992 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747011 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747027 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747075 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-25 00:09:47.747057757 +0000 UTC m=+26.980048197 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747436 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:09:47.747426296 +0000 UTC m=+26.980416736 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747474 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747498 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:47.747491967 +0000 UTC m=+26.980482407 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747541 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747561 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:47.747555059 +0000 UTC m=+26.980545499 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747601 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747610 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747617 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.747639 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:47.747629901 +0000 UTC m=+26.980620341 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.882953 4947 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.884876 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.884942 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.884999 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.885207 4947 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.887538 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-9fspn" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.893959 4947 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.894232 4947 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.895083 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.895113 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.895122 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.895156 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.895167 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:43Z","lastTransitionTime":"2026-01-25T00:09:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:43 crc kubenswrapper[4947]: W0125 00:09:43.905701 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d914454_2c17_47f2_aa53_aba3bfaad296.slice/crio-21be1f792542582a68e25cc335cf9f353faba700a027f5b07bf846179cf28c34 WatchSource:0}: Error finding container 21be1f792542582a68e25cc335cf9f353faba700a027f5b07bf846179cf28c34: Status 404 returned error can't find the container with id 21be1f792542582a68e25cc335cf9f353faba700a027f5b07bf846179cf28c34 Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.940030 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 25 00:09:43 crc kubenswrapper[4947]: I0125 00:09:43.941992 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/b89f0c74-3c8d-4e3f-8065-9e25a6749dcb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-kb5q7\" (UID: \"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\") " pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:43 crc kubenswrapper[4947]: E0125 00:09:43.990143 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:43Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.001662 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.001705 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.001715 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.001735 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.002084 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: E0125 00:09:44.030817 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.034243 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.034278 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.034291 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.034308 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.034321 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.035516 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 06:31:24.387800273 +0000 UTC Jan 25 00:09:44 crc kubenswrapper[4947]: E0125 00:09:44.046980 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177
c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c3
7e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeByt
es\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",
\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.052489 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.052555 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.052576 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.052604 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.052617 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: E0125 00:09:44.065214 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.069244 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.069293 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.069305 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.069325 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.069339 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: E0125 00:09:44.080890 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: E0125 00:09:44.081007 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.083168 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.083215 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.083227 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.083259 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.083271 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.090064 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.090096 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.090080 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:44 crc kubenswrapper[4947]: E0125 00:09:44.090214 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:09:44 crc kubenswrapper[4947]: E0125 00:09:44.090318 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:09:44 crc kubenswrapper[4947]: E0125 00:09:44.090394 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.171250 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.185858 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.185900 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.185911 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.185927 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.185937 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: W0125 00:09:44.190374 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb89f0c74_3c8d_4e3f_8065_9e25a6749dcb.slice/crio-fa247dc61cc46d54df17176fa390797ceda197ce4896459f830c403b7bd873a1 WatchSource:0}: Error finding container fa247dc61cc46d54df17176fa390797ceda197ce4896459f830c403b7bd873a1: Status 404 returned error can't find the container with id fa247dc61cc46d54df17176fa390797ceda197ce4896459f830c403b7bd873a1 Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.282845 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" event={"ID":"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb","Type":"ContainerStarted","Data":"fa247dc61cc46d54df17176fa390797ceda197ce4896459f830c403b7bd873a1"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.285770 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9fspn" event={"ID":"2d914454-2c17-47f2-aa53-aba3bfaad296","Type":"ContainerStarted","Data":"e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.285806 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9fspn" event={"ID":"2d914454-2c17-47f2-aa53-aba3bfaad296","Type":"ContainerStarted","Data":"21be1f792542582a68e25cc335cf9f353faba700a027f5b07bf846179cf28c34"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.289800 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.289847 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.289859 4947 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.289879 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.289896 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.290668 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.295814 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.295866 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.295885 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" 
event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"69cff864ded933b145fe6c03e7145916170a9b1b8617c4fb88bf02bb79d2e72c"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.299016 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2w6nd" event={"ID":"0a5c5a9a-cc45-4715-8e37-35798d843870","Type":"ContainerStarted","Data":"8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.299052 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2w6nd" event={"ID":"0a5c5a9a-cc45-4715-8e37-35798d843870","Type":"ContainerStarted","Data":"03527a92fe9240a557dbc6fbf1e9cbdf74839b42e50ef361d8b9ea5a1de1de14"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.303308 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-
syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.305156 4947 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.305215 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.305231 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.305245 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.305260 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.305273 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.318956 4947 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.335362 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.353336 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.374454 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.391506 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b8
19eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.393713 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.393757 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.393770 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.393792 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.393806 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.410902 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.431071 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.447172 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.465346 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.483618 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.494822 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.497244 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.497289 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.497307 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.497334 4947 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.497352 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.517362 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.544639 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.563025 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.576290 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.592156 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.605721 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.605799 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.605846 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.605704 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.605872 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.605890 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.627382 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.642588 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.657098 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.670355 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.681085 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.693670 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.705998 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b8
19eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.708707 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.708740 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.708749 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.708767 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.708777 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.719824 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016
dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:44Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.811980 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.812341 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.812488 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.812630 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.812787 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.915646 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.915693 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.915702 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.915718 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:44 crc kubenswrapper[4947]: I0125 00:09:44.915729 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:44Z","lastTransitionTime":"2026-01-25T00:09:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.020266 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.020328 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.020341 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.020364 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.020381 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:45Z","lastTransitionTime":"2026-01-25T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.036481 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 17:13:04.410255196 +0000 UTC Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.122701 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.122748 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.122756 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.122774 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.122787 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:45Z","lastTransitionTime":"2026-01-25T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.225674 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.225745 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.225768 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.225795 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.225815 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:45Z","lastTransitionTime":"2026-01-25T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.309821 4947 generic.go:334] "Generic (PLEG): container finished" podID="b89f0c74-3c8d-4e3f-8065-9e25a6749dcb" containerID="0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514" exitCode=0 Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.310043 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" event={"ID":"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb","Type":"ContainerDied","Data":"0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514"} Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.325411 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-hf8gg"] Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.326203 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-hf8gg" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.328429 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.328520 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.328582 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.328640 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.328883 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:45Z","lastTransitionTime":"2026-01-25T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.329353 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.330935 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.331159 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.332558 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.347714 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.364772 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.385627 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.410715 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.424412 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.433698 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.433746 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.433764 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.433786 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.433802 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:45Z","lastTransitionTime":"2026-01-25T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.441883 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.465475 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4f901695-ec8a-4fe2-ba5e-43e346b32ac3-serviceca\") pod \"node-ca-hf8gg\" (UID: \"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\") " pod="openshift-image-registry/node-ca-hf8gg" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.465561 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f901695-ec8a-4fe2-ba5e-43e346b32ac3-host\") pod \"node-ca-hf8gg\" (UID: \"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\") " pod="openshift-image-registry/node-ca-hf8gg" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.465609 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzgv2\" (UniqueName: \"kubernetes.io/projected/4f901695-ec8a-4fe2-ba5e-43e346b32ac3-kube-api-access-xzgv2\") pod \"node-ca-hf8gg\" (UID: \"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\") " pod="openshift-image-registry/node-ca-hf8gg" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.467586 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 
00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.487756 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.503322 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.517269 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.538015 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.538273 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.538391 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.538536 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.538656 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:45Z","lastTransitionTime":"2026-01-25T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.543692 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.558773 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.567104 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4f901695-ec8a-4fe2-ba5e-43e346b32ac3-serviceca\") pod \"node-ca-hf8gg\" (UID: \"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\") " pod="openshift-image-registry/node-ca-hf8gg" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.567164 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/4f901695-ec8a-4fe2-ba5e-43e346b32ac3-host\") pod \"node-ca-hf8gg\" (UID: \"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\") " pod="openshift-image-registry/node-ca-hf8gg" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.567195 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzgv2\" (UniqueName: \"kubernetes.io/projected/4f901695-ec8a-4fe2-ba5e-43e346b32ac3-kube-api-access-xzgv2\") pod \"node-ca-hf8gg\" (UID: \"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\") " pod="openshift-image-registry/node-ca-hf8gg" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.567348 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4f901695-ec8a-4fe2-ba5e-43e346b32ac3-host\") pod \"node-ca-hf8gg\" (UID: \"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\") " pod="openshift-image-registry/node-ca-hf8gg" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.568681 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/4f901695-ec8a-4fe2-ba5e-43e346b32ac3-serviceca\") pod \"node-ca-hf8gg\" (UID: \"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\") " pod="openshift-image-registry/node-ca-hf8gg" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.573858 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.588654 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.590509 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzgv2\" (UniqueName: \"kubernetes.io/projected/4f901695-ec8a-4fe2-ba5e-43e346b32ac3-kube-api-access-xzgv2\") pod \"node-ca-hf8gg\" (UID: \"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\") " pod="openshift-image-registry/node-ca-hf8gg" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.600166 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.612663 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.626629 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.642190 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.642231 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.642241 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.642257 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.642268 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:45Z","lastTransitionTime":"2026-01-25T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.643056 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.657175 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.660775 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-hf8gg" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.678000 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-con
troller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.697832 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.727911 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.750388 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.752317 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.752362 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.752381 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.752406 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.752433 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:45Z","lastTransitionTime":"2026-01-25T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.762499 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.777566 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.795630 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.808846 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.854954 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.855009 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.855025 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 
00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.855045 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.855058 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:45Z","lastTransitionTime":"2026-01-25T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.958142 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.958185 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.958196 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.958216 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:45 crc kubenswrapper[4947]: I0125 00:09:45.958228 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:45Z","lastTransitionTime":"2026-01-25T00:09:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.037555 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 00:10:24.087491363 +0000 UTC Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.060722 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.060771 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.060784 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.060808 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.060820 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:46Z","lastTransitionTime":"2026-01-25T00:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.089255 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:46 crc kubenswrapper[4947]: E0125 00:09:46.089386 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.089759 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:46 crc kubenswrapper[4947]: E0125 00:09:46.089814 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.089852 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:46 crc kubenswrapper[4947]: E0125 00:09:46.089899 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.162864 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.162902 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.162915 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.162934 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.162948 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:46Z","lastTransitionTime":"2026-01-25T00:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.266047 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.266680 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.266749 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.266821 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.266898 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:46Z","lastTransitionTime":"2026-01-25T00:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.316424 4947 generic.go:334] "Generic (PLEG): container finished" podID="b89f0c74-3c8d-4e3f-8065-9e25a6749dcb" containerID="12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32" exitCode=0 Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.316501 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" event={"ID":"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb","Type":"ContainerDied","Data":"12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.318319 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hf8gg" event={"ID":"4f901695-ec8a-4fe2-ba5e-43e346b32ac3","Type":"ContainerStarted","Data":"074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.318457 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hf8gg" event={"ID":"4f901695-ec8a-4fe2-ba5e-43e346b32ac3","Type":"ContainerStarted","Data":"eba71edf90e49316756e619d350602e2d939a9dc56b33fa0901779b9a6729afb"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.322998 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.336624 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.351457 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.370149 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.371374 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.371408 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.371418 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.371437 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.371449 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:46Z","lastTransitionTime":"2026-01-25T00:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.399370 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.415882 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.432776 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.453254 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.469703 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 
00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.474437 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.474490 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.474500 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.474515 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.474560 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:46Z","lastTransitionTime":"2026-01-25T00:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.482011 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.497436 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.511239 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.523985 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.537510 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.558855 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.576322 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 
00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.577781 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.577814 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.577826 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.577846 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.577860 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:46Z","lastTransitionTime":"2026-01-25T00:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.587646 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.607288 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.627605 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.643094 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.656596 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.668455 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.680662 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.680727 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.680745 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.680775 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.680792 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:46Z","lastTransitionTime":"2026-01-25T00:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.683877 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.699885 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.712372 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.724253 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.743680 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 
00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.761407 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.772798 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:46Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.783997 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.784073 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.784087 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:46 crc 
kubenswrapper[4947]: I0125 00:09:46.784106 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.784518 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:46Z","lastTransitionTime":"2026-01-25T00:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.887448 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.887511 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.887526 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.887550 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.887566 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:46Z","lastTransitionTime":"2026-01-25T00:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.990578 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.990647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.990666 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.990695 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:46 crc kubenswrapper[4947]: I0125 00:09:46.990716 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:46Z","lastTransitionTime":"2026-01-25T00:09:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.038304 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 01:31:49.556574073 +0000 UTC Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.094066 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.094154 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.094173 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.094197 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.094216 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:47Z","lastTransitionTime":"2026-01-25T00:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.196937 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.197004 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.197028 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.197055 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.197075 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:47Z","lastTransitionTime":"2026-01-25T00:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.299672 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.299708 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.299720 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.299737 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.299747 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:47Z","lastTransitionTime":"2026-01-25T00:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.338456 4947 generic.go:334] "Generic (PLEG): container finished" podID="b89f0c74-3c8d-4e3f-8065-9e25a6749dcb" containerID="4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377" exitCode=0 Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.338949 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" event={"ID":"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb","Type":"ContainerDied","Data":"4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377"} Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.365054 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.389496 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.403493 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.403553 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.403564 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.403584 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.403597 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:47Z","lastTransitionTime":"2026-01-25T00:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.405588 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.428674 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.445301 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.462945 4947 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.475698 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.489418 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.506120 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.506188 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.506198 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.506215 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.506226 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:47Z","lastTransitionTime":"2026-01-25T00:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.507466 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 
00:09:47.522432 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 
00:09:47.535638 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.558112 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.572814 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.585431 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:47Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.608951 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.609016 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.609028 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:47 crc 
kubenswrapper[4947]: I0125 00:09:47.609047 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.609060 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:47Z","lastTransitionTime":"2026-01-25T00:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.711797 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.711860 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.711877 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.711908 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.711927 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:47Z","lastTransitionTime":"2026-01-25T00:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.794336 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.794497 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.794552 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.794587 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.794664 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:09:55.79460605 +0000 UTC m=+35.027596530 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.794766 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.794820 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.794861 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:55.794833805 +0000 UTC m=+35.027824285 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.794860 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.794925 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.794765 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.794951 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:55.794917197 +0000 UTC m=+35.027907667 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.794968 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.794996 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.795184 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:55.795158953 +0000 UTC m=+35.028149433 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.795215 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.795242 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:47 crc kubenswrapper[4947]: E0125 00:09:47.795348 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:55.795314906 +0000 UTC m=+35.028305386 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.816913 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.816979 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.816996 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.817020 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.817037 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:47Z","lastTransitionTime":"2026-01-25T00:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.920813 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.920867 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.920880 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.920900 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:47 crc kubenswrapper[4947]: I0125 00:09:47.920914 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:47Z","lastTransitionTime":"2026-01-25T00:09:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.023589 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.023644 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.023660 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.023681 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.023696 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:48Z","lastTransitionTime":"2026-01-25T00:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.039030 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:32:06.371466463 +0000 UTC Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.089582 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:48 crc kubenswrapper[4947]: E0125 00:09:48.089795 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.090410 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.090487 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:48 crc kubenswrapper[4947]: E0125 00:09:48.090714 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:09:48 crc kubenswrapper[4947]: E0125 00:09:48.090914 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.127357 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.127422 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.127443 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.127479 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.127498 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:48Z","lastTransitionTime":"2026-01-25T00:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.231004 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.231101 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.231164 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.231204 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.231236 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:48Z","lastTransitionTime":"2026-01-25T00:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.334880 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.334932 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.334945 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.334965 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.335007 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:48Z","lastTransitionTime":"2026-01-25T00:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.345390 4947 generic.go:334] "Generic (PLEG): container finished" podID="b89f0c74-3c8d-4e3f-8065-9e25a6749dcb" containerID="be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581" exitCode=0 Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.345459 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" event={"ID":"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb","Type":"ContainerDied","Data":"be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581"} Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.368052 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.395442 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.411194 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.433846 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.438545 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.438581 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.438589 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.438605 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.438615 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:48Z","lastTransitionTime":"2026-01-25T00:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.447736 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016
dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.463952 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.477438 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.489962 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.506299 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.522177 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\
\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.536182 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.540847 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.540879 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.540888 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.540904 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.540913 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:48Z","lastTransitionTime":"2026-01-25T00:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.551459 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.567326 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.582704 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:48Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.643493 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.643529 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.643539 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:48 crc 
kubenswrapper[4947]: I0125 00:09:48.643555 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.643564 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:48Z","lastTransitionTime":"2026-01-25T00:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.746822 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.746859 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.746868 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.746887 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.746898 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:48Z","lastTransitionTime":"2026-01-25T00:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.851305 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.851368 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.851386 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.851409 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.851429 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:48Z","lastTransitionTime":"2026-01-25T00:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.954711 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.955207 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.955228 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.955257 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:48 crc kubenswrapper[4947]: I0125 00:09:48.955275 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:48Z","lastTransitionTime":"2026-01-25T00:09:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.039244 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 08:06:43.623050825 +0000 UTC Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.059108 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.059172 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.059185 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.059203 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.059214 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:49Z","lastTransitionTime":"2026-01-25T00:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.162311 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.162384 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.162398 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.162422 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.162442 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:49Z","lastTransitionTime":"2026-01-25T00:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.266176 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.266275 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.266302 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.266343 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.266368 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:49Z","lastTransitionTime":"2026-01-25T00:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.356022 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" event={"ID":"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb","Type":"ContainerStarted","Data":"0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.362241 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.362842 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.362882 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.369349 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.369423 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.369452 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.369480 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.369498 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:49Z","lastTransitionTime":"2026-01-25T00:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.383994 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.396576 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.406967 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.428113 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.447887 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.464002 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.472195 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.472249 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.472268 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.472294 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.472314 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:49Z","lastTransitionTime":"2026-01-25T00:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.478693 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.503987 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.521803 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.543554 4947 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.563297 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.585523 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.599179 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.599238 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.599258 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.599287 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.599313 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:49Z","lastTransitionTime":"2026-01-25T00:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.611335 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sss
z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.627645 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.644022 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.659276 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.683247 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.703361 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.703414 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.703437 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.703464 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.703482 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:49Z","lastTransitionTime":"2026-01-25T00:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.708121 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.724914 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.768043 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.789780 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sss
z9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.806652 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.806722 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.806739 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.806767 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.806792 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:49Z","lastTransitionTime":"2026-01-25T00:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.809686 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z 
is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.827936 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.843948 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.859673 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.881365 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.900271 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.910336 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.910390 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.910408 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.910434 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.910451 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:49Z","lastTransitionTime":"2026-01-25T00:09:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.917224 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:49 crc kubenswrapper[4947]: I0125 00:09:49.943677 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:49Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.013176 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.013212 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.013225 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.013246 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.013258 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:50Z","lastTransitionTime":"2026-01-25T00:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.040242 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 01:28:43.541106747 +0000 UTC Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.089001 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.089074 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:50 crc kubenswrapper[4947]: E0125 00:09:50.089265 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:09:50 crc kubenswrapper[4947]: E0125 00:09:50.089385 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.089539 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:50 crc kubenswrapper[4947]: E0125 00:09:50.089859 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.116191 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.116460 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.116651 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.116995 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.117182 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:50Z","lastTransitionTime":"2026-01-25T00:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.220506 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.220802 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.220873 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.220937 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.221002 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:50Z","lastTransitionTime":"2026-01-25T00:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.324031 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.324295 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.324383 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.324493 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.324621 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:50Z","lastTransitionTime":"2026-01-25T00:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.369401 4947 generic.go:334] "Generic (PLEG): container finished" podID="b89f0c74-3c8d-4e3f-8065-9e25a6749dcb" containerID="0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee" exitCode=0 Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.369831 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" event={"ID":"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb","Type":"ContainerDied","Data":"0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee"} Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.370243 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.386960 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.406662 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.427866 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.427907 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.427917 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.427934 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.427946 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:50Z","lastTransitionTime":"2026-01-25T00:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.428987 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.445627 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.450479 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.469007 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.482301 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.501358 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.524584 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.531019 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.531075 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.531096 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.531152 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.531177 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:50Z","lastTransitionTime":"2026-01-25T00:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.543469 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.561789 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.588304 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.612327 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a20
3feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.629483 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.633925 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.633987 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.634001 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.634021 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.634034 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:50Z","lastTransitionTime":"2026-01-25T00:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.643417 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.657582 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.671701 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.686083 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.698722 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.712679 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.732220 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.737056 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.737112 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.737144 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.737165 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.737180 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:50Z","lastTransitionTime":"2026-01-25T00:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.746851 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z 
is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.760001 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.772776 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.784849 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.802427 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.821567 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.833774 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.839790 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.839944 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.840038 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.840159 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.840319 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:50Z","lastTransitionTime":"2026-01-25T00:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.856699 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:50Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.924959 4947 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.945861 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.945910 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.945922 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.945951 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:50 crc kubenswrapper[4947]: I0125 00:09:50.945964 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:50Z","lastTransitionTime":"2026-01-25T00:09:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.041007 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 08:37:23.289736808 +0000 UTC Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.050345 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.050422 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.050443 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.050473 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.050492 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:51Z","lastTransitionTime":"2026-01-25T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.105834 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.127310 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.152036 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a20
3feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountP
ath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.154719 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.154778 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.154800 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.154827 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.154847 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:51Z","lastTransitionTime":"2026-01-25T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.176678 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.193793 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.208328 4947 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.219666 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.234428 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.257617 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.257663 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:51 crc 
kubenswrapper[4947]: I0125 00:09:51.257859 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.257882 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.257896 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:51Z","lastTransitionTime":"2026-01-25T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.260286 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.271907 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.301532 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.322514 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.343252 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.357325 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.362170 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.362246 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.362266 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.362296 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.362317 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:51Z","lastTransitionTime":"2026-01-25T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.379070 4947 generic.go:334] "Generic (PLEG): container finished" podID="b89f0c74-3c8d-4e3f-8065-9e25a6749dcb" containerID="2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273" exitCode=0 Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.379146 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" event={"ID":"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb","Type":"ContainerDied","Data":"2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273"} Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.403025 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.427855 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.458418 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.472709 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.482166 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.482186 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:51 crc 
kubenswrapper[4947]: I0125 00:09:51.482215 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.482230 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:51Z","lastTransitionTime":"2026-01-25T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.499833 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.519994 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.535928 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.563290 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.584302 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026
b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.584465 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.584522 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.584540 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.584569 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.584591 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:51Z","lastTransitionTime":"2026-01-25T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.601490 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.618918 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.645633 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.669341 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.687245 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.687289 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.687301 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.687322 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.687335 4947 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:51Z","lastTransitionTime":"2026-01-25T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.688477 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.16
8.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.703365 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:51Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.791470 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.791546 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.791564 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.791591 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.791612 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:51Z","lastTransitionTime":"2026-01-25T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.894521 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.894563 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.894578 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.894599 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.894617 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:51Z","lastTransitionTime":"2026-01-25T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.997911 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.997990 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.998008 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.998041 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:51 crc kubenswrapper[4947]: I0125 00:09:51.998064 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:51Z","lastTransitionTime":"2026-01-25T00:09:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.041255 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 21:34:29.797059473 +0000 UTC Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.089553 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.089621 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.089708 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:52 crc kubenswrapper[4947]: E0125 00:09:52.089791 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:09:52 crc kubenswrapper[4947]: E0125 00:09:52.089926 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:09:52 crc kubenswrapper[4947]: E0125 00:09:52.090063 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.103289 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.103352 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.103438 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.103463 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.103495 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:52Z","lastTransitionTime":"2026-01-25T00:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.192314 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.206499 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.206552 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.206564 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.206584 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.206598 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:52Z","lastTransitionTime":"2026-01-25T00:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.211805 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.232420 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c
97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.247237 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.263925 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.285481 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.300823 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.309198 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.309248 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.309261 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.309282 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.309299 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:52Z","lastTransitionTime":"2026-01-25T00:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.324063 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.340252 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.358467 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.375829 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.389199 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/0.log" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.392642 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947" exitCode=1 Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.392724 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947"} Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.396688 4947 scope.go:117] "RemoveContainer" containerID="6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.400191 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.402322 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" event={"ID":"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb","Type":"ContainerStarted","Data":"aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a"} Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.411668 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.411742 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.411771 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.411806 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.411831 4947 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:52Z","lastTransitionTime":"2026-01-25T00:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.418367 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name
\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.433728 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.451924 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.463936 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.477208 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.489971 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.505008 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.516484 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.516521 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.516532 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.516550 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.516560 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:52Z","lastTransitionTime":"2026-01-25T00:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.519012 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.536699 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.548580 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.560914 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c
97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.572381 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.583626 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.602063 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.619570 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.619600 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.619613 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.619632 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.619645 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:52Z","lastTransitionTime":"2026-01-25T00:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.621639 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.634211 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.660627 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0125 00:09:51.640841 6212 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0125 00:09:51.640872 6212 handler.go:190] Sending *v1.Namespace event 
handler 5 for removal\\\\nI0125 00:09:51.640905 6212 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0125 00:09:51.640937 6212 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0125 00:09:51.640906 6212 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0125 00:09:51.640988 6212 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0125 00:09:51.641027 6212 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0125 00:09:51.641034 6212 handler.go:208] Removed *v1.Node event handler 2\\\\nI0125 00:09:51.641085 6212 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0125 00:09:51.641099 6212 handler.go:208] Removed *v1.Node event handler 7\\\\nI0125 00:09:51.641163 6212 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0125 00:09:51.641242 6212 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0125 00:09:51.641275 6212 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0125 00:09:51.641347 6212 factory.go:656] Stopping watch factory\\\\nI0125 00:09:51.641363 6212 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:52Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.722970 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.723024 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.723043 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.723070 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.723087 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:52Z","lastTransitionTime":"2026-01-25T00:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.825910 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.825983 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.826002 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.826030 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.826049 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:52Z","lastTransitionTime":"2026-01-25T00:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.929366 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.929715 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.929729 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.929748 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:52 crc kubenswrapper[4947]: I0125 00:09:52.929761 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:52Z","lastTransitionTime":"2026-01-25T00:09:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.033013 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.033060 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.033072 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.033091 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.033103 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:53Z","lastTransitionTime":"2026-01-25T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.042550 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 15:08:17.289131796 +0000 UTC Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.135923 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.135988 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.136005 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.136029 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.136045 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:53Z","lastTransitionTime":"2026-01-25T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.239819 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.239871 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.239896 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.239916 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.239928 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:53Z","lastTransitionTime":"2026-01-25T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.342198 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.342253 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.342270 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.342289 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.342302 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:53Z","lastTransitionTime":"2026-01-25T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.413594 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/0.log" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.418932 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e"} Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.435816 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95
b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe
81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.444884 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.444937 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.444951 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.444972 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.444986 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:53Z","lastTransitionTime":"2026-01-25T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.450645 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:
09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.460073 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.476340 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.491960 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.506666 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.521605 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.538974 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.548485 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.548547 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.548566 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.548595 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.548616 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:53Z","lastTransitionTime":"2026-01-25T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.556498 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.572122 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.592541 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.614496 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.632333 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.652760 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.652853 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.652885 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.652936 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.652965 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:53Z","lastTransitionTime":"2026-01-25T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.663859 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0125 00:09:51.640841 6212 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0125 00:09:51.640872 6212 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0125 00:09:51.640905 6212 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0125 
00:09:51.640937 6212 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0125 00:09:51.640906 6212 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0125 00:09:51.640988 6212 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0125 00:09:51.641027 6212 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0125 00:09:51.641034 6212 handler.go:208] Removed *v1.Node event handler 2\\\\nI0125 00:09:51.641085 6212 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0125 00:09:51.641099 6212 handler.go:208] Removed *v1.Node event handler 7\\\\nI0125 00:09:51.641163 6212 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0125 00:09:51.641242 6212 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0125 00:09:51.641275 6212 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0125 00:09:51.641347 6212 factory.go:656] Stopping watch factory\\\\nI0125 00:09:51.641363 6212 handler.go:208] Removed *v1.EgressFirewall 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.756027 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.756071 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.756082 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.756098 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.756107 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:53Z","lastTransitionTime":"2026-01-25T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.858803 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.858863 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.858881 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.858906 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.858926 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:53Z","lastTransitionTime":"2026-01-25T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.961491 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.961593 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.961616 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.961644 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:53 crc kubenswrapper[4947]: I0125 00:09:53.961665 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:53Z","lastTransitionTime":"2026-01-25T00:09:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.043597 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 20:39:10.274130648 +0000 UTC Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.063984 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.064194 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.064262 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.064321 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.064376 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.089073 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.089238 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.089077 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:54 crc kubenswrapper[4947]: E0125 00:09:54.089380 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:09:54 crc kubenswrapper[4947]: E0125 00:09:54.089515 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:09:54 crc kubenswrapper[4947]: E0125 00:09:54.089626 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.145996 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.146067 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.146085 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.146113 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.146161 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: E0125 00:09:54.167693 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.174601 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.174674 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.174700 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.174725 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.174742 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: E0125 00:09:54.194585 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.199832 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.199891 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.199913 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.199939 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.199958 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: E0125 00:09:54.219325 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.250084 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.250180 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.250205 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.250236 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.250260 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: E0125 00:09:54.272649 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.279294 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.279352 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.279370 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.279393 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.279407 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: E0125 00:09:54.304952 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: E0125 00:09:54.305069 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.306880 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.306918 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.306927 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.306945 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.306958 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.409966 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.410050 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.410074 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.410107 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.410203 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.424100 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/1.log" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.425429 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/0.log" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.429311 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e" exitCode=1 Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.429362 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e"} Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.429437 4947 scope.go:117] "RemoveContainer" containerID="6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.430890 4947 scope.go:117] "RemoveContainer" containerID="911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e" Jan 25 00:09:54 crc kubenswrapper[4947]: E0125 00:09:54.431259 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.450859 4947 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.470607 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.487477 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.508529 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0125 00:09:51.640841 6212 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0125 00:09:51.640872 6212 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0125 00:09:51.640905 6212 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0125 
00:09:51.640937 6212 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0125 00:09:51.640906 6212 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0125 00:09:51.640988 6212 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0125 00:09:51.641027 6212 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0125 00:09:51.641034 6212 handler.go:208] Removed *v1.Node event handler 2\\\\nI0125 00:09:51.641085 6212 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0125 00:09:51.641099 6212 handler.go:208] Removed *v1.Node event handler 7\\\\nI0125 00:09:51.641163 6212 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0125 00:09:51.641242 6212 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0125 00:09:51.641275 6212 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0125 00:09:51.641347 6212 factory.go:656] Stopping watch factory\\\\nI0125 00:09:51.641363 6212 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:53Z\\\",\\\"message\\\":\\\"]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nF0125 00:09:53.330571 6392 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z]\\\\nI0125 00:09:53.330579 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":
\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.514507 4947 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.514557 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.514578 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.514604 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.514622 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.524049 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.539271 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.557347 4947 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.580003 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.600466 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.618834 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.618913 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.618930 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.618963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.618982 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.627206 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.653017 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.672065 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-
dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure 
cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\
\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.689951 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.706939 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:54Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.722962 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.723031 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.723057 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc 
kubenswrapper[4947]: I0125 00:09:54.723096 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.723172 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.826587 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.826655 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.826688 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.826719 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.826740 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.929646 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.929693 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.929705 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.929755 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:54 crc kubenswrapper[4947]: I0125 00:09:54.929767 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:54Z","lastTransitionTime":"2026-01-25T00:09:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.032917 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.033016 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.033048 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.033083 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.033107 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:55Z","lastTransitionTime":"2026-01-25T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.045424 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 17:52:42.773281739 +0000 UTC Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.137042 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.137189 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.137219 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.137255 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.137278 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:55Z","lastTransitionTime":"2026-01-25T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.192671 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm"] Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.193312 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.198361 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.200865 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.213329 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.226352 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.240295 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.241852 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.241907 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.241932 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.241964 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.241992 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:55Z","lastTransitionTime":"2026-01-25T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.264252 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6ccc6bd4276b9c0ae810c5d3145b10e523372c4f2d00e1c63c4757025e8b9947\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"message\\\":\\\"y (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0125 00:09:51.640841 6212 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0125 00:09:51.640872 6212 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0125 00:09:51.640905 6212 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0125 
00:09:51.640937 6212 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0125 00:09:51.640906 6212 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0125 00:09:51.640988 6212 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0125 00:09:51.641027 6212 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0125 00:09:51.641034 6212 handler.go:208] Removed *v1.Node event handler 2\\\\nI0125 00:09:51.641085 6212 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0125 00:09:51.641099 6212 handler.go:208] Removed *v1.Node event handler 7\\\\nI0125 00:09:51.641163 6212 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0125 00:09:51.641242 6212 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0125 00:09:51.641275 6212 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0125 00:09:51.641347 6212 factory.go:656] Stopping watch factory\\\\nI0125 00:09:51.641363 6212 handler.go:208] Removed *v1.EgressFirewall ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:53Z\\\",\\\"message\\\":\\\"]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nF0125 00:09:53.330571 6392 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z]\\\\nI0125 00:09:53.330579 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":
\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.280760 4947 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825
771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-co
ntroller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.291439 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ba95f90e-9162-425c-9ac3-d655ea43cfa0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.291486 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggxrj\" (UniqueName: \"kubernetes.io/projected/ba95f90e-9162-425c-9ac3-d655ea43cfa0-kube-api-access-ggxrj\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: 
I0125 00:09:55.291532 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ba95f90e-9162-425c-9ac3-d655ea43cfa0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.291554 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ba95f90e-9162-425c-9ac3-d655ea43cfa0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.294803 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.307877 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.319259 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.337639 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.344464 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.344494 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.344504 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.344521 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.344532 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:55Z","lastTransitionTime":"2026-01-25T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.356627 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z 
is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.369262 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.388272 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 
00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.392185 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ba95f90e-9162-425c-9ac3-d655ea43cfa0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.392266 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ba95f90e-9162-425c-9ac3-d655ea43cfa0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.392307 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ba95f90e-9162-425c-9ac3-d655ea43cfa0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" 
Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.392362 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggxrj\" (UniqueName: \"kubernetes.io/projected/ba95f90e-9162-425c-9ac3-d655ea43cfa0-kube-api-access-ggxrj\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.392993 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ba95f90e-9162-425c-9ac3-d655ea43cfa0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.393140 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ba95f90e-9162-425c-9ac3-d655ea43cfa0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.400998 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ba95f90e-9162-425c-9ac3-d655ea43cfa0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.403604 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.418675 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.422028 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggxrj\" (UniqueName: \"kubernetes.io/projected/ba95f90e-9162-425c-9ac3-d655ea43cfa0-kube-api-access-ggxrj\") pod \"ovnkube-control-plane-749d76644c-nxpzm\" (UID: \"ba95f90e-9162-425c-9ac3-d655ea43cfa0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.434115 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/1.log" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.434866 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.438252 4947 scope.go:117] "RemoveContainer" containerID="911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e" Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.438560 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\"" 
pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.446800 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.446844 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.446860 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.446882 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.446903 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:55Z","lastTransitionTime":"2026-01-25T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.449161 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.461805 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.489324 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:53Z\\\",\\\"message\\\":\\\"]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0125 00:09:53.330571 6392 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z]\\\\nI0125 00:09:53.330579 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166
ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.505167 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.515360 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.523221 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131c
e2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.541960 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.549770 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.549836 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.549851 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.549877 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.549895 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:55Z","lastTransitionTime":"2026-01-25T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.561777 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.585803 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.600529 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.610386 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.622376 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.640530 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.652804 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.652838 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.652849 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.652864 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.652877 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:55Z","lastTransitionTime":"2026-01-25T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.654776 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.668798 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.684407 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:55Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.756243 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.756290 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.756308 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.756329 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.756344 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:55Z","lastTransitionTime":"2026-01-25T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.796605 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.796700 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.796736 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.796761 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.796788 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.796869 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.796957 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:11.796931291 +0000 UTC m=+51.029921731 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.797000 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.797031 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.797039 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.797050 4947 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.797067 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.797086 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.797071 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.797045 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:11.797033574 +0000 UTC m=+51.030024014 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.797228 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:11.797211118 +0000 UTC m=+51.030201568 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.797244 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:10:11.797236079 +0000 UTC m=+51.030226529 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:09:55 crc kubenswrapper[4947]: E0125 00:09:55.797258 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:11.797251829 +0000 UTC m=+51.030242289 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.861116 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.861216 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.861235 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.861265 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.861284 4947 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:55Z","lastTransitionTime":"2026-01-25T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.965517 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.965585 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.965605 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.965634 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:55 crc kubenswrapper[4947]: I0125 00:09:55.965655 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:55Z","lastTransitionTime":"2026-01-25T00:09:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.046175 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 16:25:33.176400424 +0000 UTC Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.069047 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.069149 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.069168 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.069193 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.069211 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:56Z","lastTransitionTime":"2026-01-25T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.089262 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.089304 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.089262 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:56 crc kubenswrapper[4947]: E0125 00:09:56.089638 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:09:56 crc kubenswrapper[4947]: E0125 00:09:56.089744 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:09:56 crc kubenswrapper[4947]: E0125 00:09:56.090016 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.172177 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.172234 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.172250 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.172273 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.172291 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:56Z","lastTransitionTime":"2026-01-25T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.274840 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.274903 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.274926 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.274958 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.274983 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:56Z","lastTransitionTime":"2026-01-25T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.378657 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.378737 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.378759 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.378791 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.378812 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:56Z","lastTransitionTime":"2026-01-25T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.443846 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" event={"ID":"ba95f90e-9162-425c-9ac3-d655ea43cfa0","Type":"ContainerStarted","Data":"889a90c35956bc31d77a3e867b33cc7ecd8b4e51803d12e8af7e0db44b30b659"} Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.482642 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.482714 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.482741 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.482772 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.482791 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:56Z","lastTransitionTime":"2026-01-25T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.587295 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.587378 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.587395 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.587420 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.587440 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:56Z","lastTransitionTime":"2026-01-25T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.690854 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.690922 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.690943 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.690970 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.690988 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:56Z","lastTransitionTime":"2026-01-25T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.731675 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-hj7kb"] Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.732759 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:09:56 crc kubenswrapper[4947]: E0125 00:09:56.733082 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.752576 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/ku
be-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.772991 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.793761 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.794764 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.794824 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.794845 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.794873 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.794894 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:56Z","lastTransitionTime":"2026-01-25T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.807791 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.807874 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxbvt\" (UniqueName: \"kubernetes.io/projected/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-kube-api-access-qxbvt\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.809766 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.840539 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:53Z\\\",\\\"message\\\":\\\"]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0125 00:09:53.330571 6392 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z]\\\\nI0125 00:09:53.330579 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166
ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.859056 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.879192 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.898167 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.898250 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.898276 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 
00:09:56.898314 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.898337 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:56Z","lastTransitionTime":"2026-01-25T00:09:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.900989 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.909001 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.909085 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxbvt\" (UniqueName: \"kubernetes.io/projected/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-kube-api-access-qxbvt\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:09:56 crc kubenswrapper[4947]: E0125 00:09:56.909772 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:09:56 crc 
kubenswrapper[4947]: E0125 00:09:56.909869 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs podName:9a64fbf1-68fc-4379-9bb7-009c4f2cc812 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:57.409843562 +0000 UTC m=+36.642834032 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs") pod "network-metrics-daemon-hj7kb" (UID: "9a64fbf1-68fc-4379-9bb7-009c4f2cc812") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.929784 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.942460 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxbvt\" (UniqueName: \"kubernetes.io/projected/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-kube-api-access-qxbvt\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:09:56 crc 
kubenswrapper[4947]: I0125 00:09:56.948767 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:56 crc kubenswrapper[4947]: I0125 00:09:56.976635 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypo
int\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\"
:\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.001398 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb
2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:56Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.002348 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.002408 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.002426 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.002451 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.002469 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:57Z","lastTransitionTime":"2026-01-25T00:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.017973 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:57Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.038558 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 
00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:57Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.047081 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 12:01:29.90063808 +0000 UTC Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.057559 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:57Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.076229 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:57Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.106105 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.106194 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.106260 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.106285 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.106303 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:57Z","lastTransitionTime":"2026-01-25T00:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.209465 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.209550 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.209571 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.209599 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.209621 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:57Z","lastTransitionTime":"2026-01-25T00:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.312517 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.312578 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.312596 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.312624 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.312645 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:57Z","lastTransitionTime":"2026-01-25T00:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.414818 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:09:57 crc kubenswrapper[4947]: E0125 00:09:57.415009 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:09:57 crc kubenswrapper[4947]: E0125 00:09:57.415095 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs podName:9a64fbf1-68fc-4379-9bb7-009c4f2cc812 nodeName:}" failed. No retries permitted until 2026-01-25 00:09:58.415075455 +0000 UTC m=+37.648065895 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs") pod "network-metrics-daemon-hj7kb" (UID: "9a64fbf1-68fc-4379-9bb7-009c4f2cc812") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.415712 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.415814 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.415836 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.415862 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.415879 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:57Z","lastTransitionTime":"2026-01-25T00:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.521349 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.521390 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.521404 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.521421 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.521433 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:57Z","lastTransitionTime":"2026-01-25T00:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.624492 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.624541 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.624552 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.624570 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.624581 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:57Z","lastTransitionTime":"2026-01-25T00:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.727564 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.727658 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.727681 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.727754 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.727818 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:57Z","lastTransitionTime":"2026-01-25T00:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.831251 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.831338 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.831359 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.831388 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.831407 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:57Z","lastTransitionTime":"2026-01-25T00:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.935886 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.936720 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.936853 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.937181 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:57 crc kubenswrapper[4947]: I0125 00:09:57.937377 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:57Z","lastTransitionTime":"2026-01-25T00:09:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.041255 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.041607 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.041755 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.041955 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.042212 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:58Z","lastTransitionTime":"2026-01-25T00:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.047757 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 09:19:35.007536696 +0000 UTC Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.089315 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.089322 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:09:58 crc kubenswrapper[4947]: E0125 00:09:58.089884 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.089352 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.089355 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:09:58 crc kubenswrapper[4947]: E0125 00:09:58.090281 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:09:58 crc kubenswrapper[4947]: E0125 00:09:58.090527 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:09:58 crc kubenswrapper[4947]: E0125 00:09:58.090795 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.145586 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.145653 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.145672 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.145702 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.145723 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:58Z","lastTransitionTime":"2026-01-25T00:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.249257 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.249380 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.249430 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.249468 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.249503 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:58Z","lastTransitionTime":"2026-01-25T00:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.352656 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.352712 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.352730 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.352754 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.352772 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:58Z","lastTransitionTime":"2026-01-25T00:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.428942 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:09:58 crc kubenswrapper[4947]: E0125 00:09:58.429240 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:09:58 crc kubenswrapper[4947]: E0125 00:09:58.429348 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs podName:9a64fbf1-68fc-4379-9bb7-009c4f2cc812 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:00.429320432 +0000 UTC m=+39.662310922 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs") pod "network-metrics-daemon-hj7kb" (UID: "9a64fbf1-68fc-4379-9bb7-009c4f2cc812") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.454820 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" event={"ID":"ba95f90e-9162-425c-9ac3-d655ea43cfa0","Type":"ContainerStarted","Data":"33f21149a21977361a24c0bc54887c8ca18a3d72cd6714bc26228b1d049c181a"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.454927 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" event={"ID":"ba95f90e-9162-425c-9ac3-d655ea43cfa0","Type":"ContainerStarted","Data":"6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.455988 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.456029 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.456045 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.456065 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.456084 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:58Z","lastTransitionTime":"2026-01-25T00:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.472558 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\"
,\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.497058 4947 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.517581 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.544212 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:53Z\\\",\\\"message\\\":\\\"]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0125 00:09:53.330571 6392 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z]\\\\nI0125 00:09:53.330579 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166
ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.559266 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.560892 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.560940 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.560953 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:58 crc 
kubenswrapper[4947]: I0125 00:09:58.560976 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.560994 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:58Z","lastTransitionTime":"2026-01-25T00:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.583859 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.604197 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.624749 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.650006 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.664604 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.664877 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.665022 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.665209 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.665373 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:58Z","lastTransitionTime":"2026-01-25T00:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.674383 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z 
is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.689492 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.705668 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.721893 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.744482 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c
97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.762080 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.767837 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.767958 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.767981 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 
00:09:58.768017 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.768040 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:58Z","lastTransitionTime":"2026-01-25T00:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.781895 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f
4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:58Z is after 2025-08-24T17:21:41Z" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.871844 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.871919 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.871943 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.872081 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.872218 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:58Z","lastTransitionTime":"2026-01-25T00:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.976018 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.976087 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.976109 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.976181 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:58 crc kubenswrapper[4947]: I0125 00:09:58.976214 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:58Z","lastTransitionTime":"2026-01-25T00:09:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.048540 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 20:37:32.086168386 +0000 UTC Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.080889 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.080960 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.080982 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.081015 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.081034 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:59Z","lastTransitionTime":"2026-01-25T00:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.184274 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.184347 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.184366 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.184394 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.184413 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:59Z","lastTransitionTime":"2026-01-25T00:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.288309 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.288367 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.288378 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.288400 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.288413 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:59Z","lastTransitionTime":"2026-01-25T00:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.391672 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.391770 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.391795 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.391831 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.391862 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:59Z","lastTransitionTime":"2026-01-25T00:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.495796 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.495899 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.495934 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.495971 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.495996 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:59Z","lastTransitionTime":"2026-01-25T00:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.599832 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.599915 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.599937 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.599968 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.599988 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:59Z","lastTransitionTime":"2026-01-25T00:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.703239 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.703321 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.703342 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.703372 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.703394 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:59Z","lastTransitionTime":"2026-01-25T00:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.807221 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.807275 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.807288 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.807310 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.807324 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:59Z","lastTransitionTime":"2026-01-25T00:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.910580 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.910647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.910666 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.910692 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:09:59 crc kubenswrapper[4947]: I0125 00:09:59.910710 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:09:59Z","lastTransitionTime":"2026-01-25T00:09:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.014550 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.014624 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.014642 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.014670 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.014687 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:00Z","lastTransitionTime":"2026-01-25T00:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.049412 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 11:43:10.247858341 +0000 UTC Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.088975 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.089019 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:00 crc kubenswrapper[4947]: E0125 00:10:00.089272 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.089118 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:00 crc kubenswrapper[4947]: E0125 00:10:00.089519 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:00 crc kubenswrapper[4947]: E0125 00:10:00.089889 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.089926 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:00 crc kubenswrapper[4947]: E0125 00:10:00.090119 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.117525 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.117628 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.117650 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.117683 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.117704 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:00Z","lastTransitionTime":"2026-01-25T00:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.221446 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.221528 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.221545 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.221574 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.221592 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:00Z","lastTransitionTime":"2026-01-25T00:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.325416 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.325509 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.325536 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.325576 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.325605 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:00Z","lastTransitionTime":"2026-01-25T00:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.429463 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.429543 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.429566 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.429597 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.429617 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:00Z","lastTransitionTime":"2026-01-25T00:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.455189 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:00 crc kubenswrapper[4947]: E0125 00:10:00.455444 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:10:00 crc kubenswrapper[4947]: E0125 00:10:00.455579 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs podName:9a64fbf1-68fc-4379-9bb7-009c4f2cc812 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:04.455543082 +0000 UTC m=+43.688533552 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs") pod "network-metrics-daemon-hj7kb" (UID: "9a64fbf1-68fc-4379-9bb7-009c4f2cc812") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.534541 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.534604 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.534623 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.534651 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.534671 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:00Z","lastTransitionTime":"2026-01-25T00:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.637963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.638029 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.638048 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.638074 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.638092 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:00Z","lastTransitionTime":"2026-01-25T00:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.742624 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.742713 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.742742 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.742776 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.742805 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:00Z","lastTransitionTime":"2026-01-25T00:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.847624 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.847694 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.847715 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.847741 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.847759 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:00Z","lastTransitionTime":"2026-01-25T00:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.951622 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.951695 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.951716 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.951742 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:00 crc kubenswrapper[4947]: I0125 00:10:00.951758 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:00Z","lastTransitionTime":"2026-01-25T00:10:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.050203 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 04:52:09.081771522 +0000 UTC Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.055406 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.055483 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.055513 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.055667 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.055710 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:01Z","lastTransitionTime":"2026-01-25T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.111494 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016
dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.132965 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.151660 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.159261 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.159366 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.159396 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.159435 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.159463 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:01Z","lastTransitionTime":"2026-01-25T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.181719 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.211018 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.235049 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.252592 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.263812 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.263857 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.263877 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.263904 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.263923 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:01Z","lastTransitionTime":"2026-01-25T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.270294 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc 
kubenswrapper[4947]: I0125 00:10:01.296435 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a4
28c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.332166 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.353262 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3
d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.367344 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.367396 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.367417 4947 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.367442 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.367460 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:01Z","lastTransitionTime":"2026-01-25T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.372228 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b
057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\
\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.393350 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.413886 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.431661 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.460532 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:53Z\\\",\\\"message\\\":\\\"]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0125 00:09:53.330571 6392 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z]\\\\nI0125 00:09:53.330579 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166
ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:01Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.470606 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.470685 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.470709 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.470743 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.470771 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:01Z","lastTransitionTime":"2026-01-25T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.574706 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.574802 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.574827 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.574860 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.574881 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:01Z","lastTransitionTime":"2026-01-25T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.678800 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.678910 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.678929 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.678956 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.678974 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:01Z","lastTransitionTime":"2026-01-25T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.783202 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.783297 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.783317 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.783345 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.783363 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:01Z","lastTransitionTime":"2026-01-25T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.887040 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.887162 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.887191 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.887224 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.887253 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:01Z","lastTransitionTime":"2026-01-25T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.991888 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.991961 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.991981 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.992008 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:01 crc kubenswrapper[4947]: I0125 00:10:01.992029 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:01Z","lastTransitionTime":"2026-01-25T00:10:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.050915 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 04:30:25.774616097 +0000 UTC Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.089534 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.089632 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.089641 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.089575 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:02 crc kubenswrapper[4947]: E0125 00:10:02.089830 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:02 crc kubenswrapper[4947]: E0125 00:10:02.089981 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:02 crc kubenswrapper[4947]: E0125 00:10:02.090092 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:02 crc kubenswrapper[4947]: E0125 00:10:02.090304 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.099087 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.099294 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.099379 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.099487 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.099519 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:02Z","lastTransitionTime":"2026-01-25T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.203838 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.203912 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.203931 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.203958 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.203979 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:02Z","lastTransitionTime":"2026-01-25T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.308209 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.308306 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.308331 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.308364 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.308390 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:02Z","lastTransitionTime":"2026-01-25T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.411797 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.411896 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.411918 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.411944 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.411993 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:02Z","lastTransitionTime":"2026-01-25T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.514385 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.514461 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.514477 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.514499 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.514515 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:02Z","lastTransitionTime":"2026-01-25T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.617803 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.617910 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.617937 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.617974 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.618004 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:02Z","lastTransitionTime":"2026-01-25T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.721197 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.721271 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.721294 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.721327 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.721351 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:02Z","lastTransitionTime":"2026-01-25T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.824941 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.825015 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.825040 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.825071 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.825093 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:02Z","lastTransitionTime":"2026-01-25T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.928231 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.928309 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.928328 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.928353 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:02 crc kubenswrapper[4947]: I0125 00:10:02.928376 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:02Z","lastTransitionTime":"2026-01-25T00:10:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.031303 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.031392 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.031418 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.031452 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.031479 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:03Z","lastTransitionTime":"2026-01-25T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.051546 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 09:10:43.30461461 +0000 UTC Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.135533 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.135597 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.135618 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.135642 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.135661 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:03Z","lastTransitionTime":"2026-01-25T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.244082 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.244202 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.244225 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.244258 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.244277 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:03Z","lastTransitionTime":"2026-01-25T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.348672 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.348781 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.348811 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.348851 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.348880 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:03Z","lastTransitionTime":"2026-01-25T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.453397 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.453500 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.453523 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.453563 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.453586 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:03Z","lastTransitionTime":"2026-01-25T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.556547 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.556631 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.556659 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.556691 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.556714 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:03Z","lastTransitionTime":"2026-01-25T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.660347 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.660435 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.660459 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.660493 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.660515 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:03Z","lastTransitionTime":"2026-01-25T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.764415 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.764480 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.764498 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.764526 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.764545 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:03Z","lastTransitionTime":"2026-01-25T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.867530 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.867621 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.867640 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.867670 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.867691 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:03Z","lastTransitionTime":"2026-01-25T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.971524 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.971590 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.971607 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.971634 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:03 crc kubenswrapper[4947]: I0125 00:10:03.971653 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:03Z","lastTransitionTime":"2026-01-25T00:10:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.052008 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 09:15:25.965877319 +0000 UTC Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.075745 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.075824 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.075843 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.075875 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.075895 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.089204 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.089304 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.089305 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.089404 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.089418 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.089566 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.089693 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.089816 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.179830 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.179887 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.179901 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.179920 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.179933 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.283647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.283721 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.283741 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.283767 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.283785 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.358747 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.358812 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.358830 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.358855 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.358874 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.379849 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:04Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.384666 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.384727 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.384746 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.384772 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.384789 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.405764 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:04Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.411374 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.411441 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.411454 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.411475 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.411491 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.431816 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:04Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.437216 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.437384 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.437471 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.437578 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.437665 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.461008 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:04Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.466055 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.466193 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.466226 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.466264 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.466290 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.487520 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:04Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.487774 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.489831 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.489903 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.489922 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.489949 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.489971 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.509707 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.509967 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:10:04 crc kubenswrapper[4947]: E0125 00:10:04.510078 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs podName:9a64fbf1-68fc-4379-9bb7-009c4f2cc812 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:12.510048022 +0000 UTC m=+51.743038502 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs") pod "network-metrics-daemon-hj7kb" (UID: "9a64fbf1-68fc-4379-9bb7-009c4f2cc812") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.593632 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.593968 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.594189 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.594396 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.594621 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.698419 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.698933 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.699198 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.699477 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.699911 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.804399 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.804480 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.804498 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.804529 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.804554 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.907506 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.907594 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.907620 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.907654 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:04 crc kubenswrapper[4947]: I0125 00:10:04.907673 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:04Z","lastTransitionTime":"2026-01-25T00:10:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.011639 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.011715 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.011735 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.011765 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.011785 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:05Z","lastTransitionTime":"2026-01-25T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.053057 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 04:07:25.210671664 +0000 UTC Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.115212 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.115273 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.115290 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.115314 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.115333 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:05Z","lastTransitionTime":"2026-01-25T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.219464 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.219541 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.219560 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.219591 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.219610 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:05Z","lastTransitionTime":"2026-01-25T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.322478 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.322529 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.322546 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.322572 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.322590 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:05Z","lastTransitionTime":"2026-01-25T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.426316 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.426422 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.426444 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.426472 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.426489 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:05Z","lastTransitionTime":"2026-01-25T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.529790 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.529876 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.529901 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.529935 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.529958 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:05Z","lastTransitionTime":"2026-01-25T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.633715 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.633796 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.633823 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.633852 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.633869 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:05Z","lastTransitionTime":"2026-01-25T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.737951 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.738013 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.738028 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.738050 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.738061 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:05Z","lastTransitionTime":"2026-01-25T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.842350 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.842417 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.842429 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.842453 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.842470 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:05Z","lastTransitionTime":"2026-01-25T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.945868 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.945935 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.945949 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.945968 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:05 crc kubenswrapper[4947]: I0125 00:10:05.945981 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:05Z","lastTransitionTime":"2026-01-25T00:10:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.049637 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.049678 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.049697 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.049722 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.049738 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:06Z","lastTransitionTime":"2026-01-25T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.054100 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 16:18:47.33146468 +0000 UTC Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.088749 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.088828 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.088771 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.088771 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:06 crc kubenswrapper[4947]: E0125 00:10:06.088973 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:06 crc kubenswrapper[4947]: E0125 00:10:06.089164 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:06 crc kubenswrapper[4947]: E0125 00:10:06.089278 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:06 crc kubenswrapper[4947]: E0125 00:10:06.089363 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.152919 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.153008 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.153030 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.153064 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.153088 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:06Z","lastTransitionTime":"2026-01-25T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.256892 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.256963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.256985 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.257014 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.257033 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:06Z","lastTransitionTime":"2026-01-25T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.360088 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.360175 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.360193 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.360218 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.360238 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:06Z","lastTransitionTime":"2026-01-25T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.463470 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.463577 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.463604 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.463644 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.463670 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:06Z","lastTransitionTime":"2026-01-25T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.567678 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.567769 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.567796 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.567832 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.567857 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:06Z","lastTransitionTime":"2026-01-25T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.671737 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.672099 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.672331 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.672485 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.672635 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:06Z","lastTransitionTime":"2026-01-25T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.777678 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.777773 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.777787 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.777811 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.777823 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:06Z","lastTransitionTime":"2026-01-25T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.880648 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.880685 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.880696 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.880713 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.880724 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:06Z","lastTransitionTime":"2026-01-25T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.983562 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.983685 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.983711 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.983740 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:06 crc kubenswrapper[4947]: I0125 00:10:06.983761 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:06Z","lastTransitionTime":"2026-01-25T00:10:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.054570 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 13:32:58.21086024 +0000 UTC Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.054929 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.067096 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.078692 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819e
edb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.087211 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.087285 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.087308 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.087342 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.087360 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:07Z","lastTransitionTime":"2026-01-25T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.090924 4947 scope.go:117] "RemoveContainer" containerID="911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.098838 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.121340 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.140981 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.163102 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.179495 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.190286 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.190614 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.190736 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.190848 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.190945 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:07Z","lastTransitionTime":"2026-01-25T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.199393 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.218203 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.232572 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3
d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.250707 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c
97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.270437 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.283067 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.295397 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.295483 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.295509 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:07 crc 
kubenswrapper[4947]: I0125 00:10:07.295541 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.295564 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:07Z","lastTransitionTime":"2026-01-25T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.300909 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.316838 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.330704 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.359716 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:53Z\\\",\\\"message\\\":\\\"]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0125 00:09:53.330571 6392 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z]\\\\nI0125 00:09:53.330579 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166
ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:07Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.398639 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.398702 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.398719 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.398744 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.398766 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:07Z","lastTransitionTime":"2026-01-25T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.501700 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.502178 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.502432 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.502688 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.502903 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:07Z","lastTransitionTime":"2026-01-25T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.605967 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.606036 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.606060 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.606091 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.606114 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:07Z","lastTransitionTime":"2026-01-25T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.709768 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.710156 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.710172 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.710193 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.710209 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:07Z","lastTransitionTime":"2026-01-25T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.813230 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.813289 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.813302 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.813324 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.813341 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:07Z","lastTransitionTime":"2026-01-25T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.917168 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.917236 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.917254 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.917282 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:07 crc kubenswrapper[4947]: I0125 00:10:07.917301 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:07Z","lastTransitionTime":"2026-01-25T00:10:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.020344 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.020390 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.020404 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.020428 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.020446 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:08Z","lastTransitionTime":"2026-01-25T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.054749 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 21:24:55.305132547 +0000 UTC Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.089626 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.089702 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.089809 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:08 crc kubenswrapper[4947]: E0125 00:10:08.089802 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.089865 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:08 crc kubenswrapper[4947]: E0125 00:10:08.090400 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:08 crc kubenswrapper[4947]: E0125 00:10:08.090413 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:08 crc kubenswrapper[4947]: E0125 00:10:08.090484 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.123404 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.123481 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.123502 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.123534 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.123556 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:08Z","lastTransitionTime":"2026-01-25T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.225888 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.225946 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.225958 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.225985 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.226000 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:08Z","lastTransitionTime":"2026-01-25T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.329467 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.329534 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.329553 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.329576 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.329591 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:08Z","lastTransitionTime":"2026-01-25T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.433199 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.433275 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.433294 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.433324 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.433347 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:08Z","lastTransitionTime":"2026-01-25T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.505758 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/1.log" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.514471 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885"} Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.514994 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.535625 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54
d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.536727 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.536773 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.536786 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.536805 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.536816 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:08Z","lastTransitionTime":"2026-01-25T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.550358 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc 
kubenswrapper[4947]: I0125 00:10:08.563886 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.578120 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.592619 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.609784 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.629004 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.641428 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.641506 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.641519 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.641539 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.641554 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:08Z","lastTransitionTime":"2026-01-25T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.643239 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z 
is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.656031 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"
}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c98711
7ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.668214 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd10641-e4b3-4d72-ba91-ed540316eb7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.686985 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.699501 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3
d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.715015 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.732284 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.744683 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.744753 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.744769 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.744800 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.744825 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:08Z","lastTransitionTime":"2026-01-25T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.751628 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.766606 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.801099 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:53Z\\\",\\\"message\\\":\\\"]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0125 00:09:53.330571 6392 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z]\\\\nI0125 00:09:53.330579 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:08Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.848288 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.848344 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.848355 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.848375 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.848384 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:08Z","lastTransitionTime":"2026-01-25T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.951115 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.951266 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.951292 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.951333 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:08 crc kubenswrapper[4947]: I0125 00:10:08.951360 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:08Z","lastTransitionTime":"2026-01-25T00:10:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.054873 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 07:32:57.283269971 +0000 UTC Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.054990 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.055064 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.055085 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.055114 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.055162 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:09Z","lastTransitionTime":"2026-01-25T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.158419 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.158479 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.158496 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.158523 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.158542 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:09Z","lastTransitionTime":"2026-01-25T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.261236 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.261302 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.261319 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.261343 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.261363 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:09Z","lastTransitionTime":"2026-01-25T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.364419 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.364500 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.364523 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.364552 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.364578 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:09Z","lastTransitionTime":"2026-01-25T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.467085 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.467189 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.467213 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.467247 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.467272 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:09Z","lastTransitionTime":"2026-01-25T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.522369 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/2.log" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.523530 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/1.log" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.527760 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885" exitCode=1 Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.527863 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885"} Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.528043 4947 scope.go:117] "RemoveContainer" containerID="911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.529369 4947 scope.go:117] "RemoveContainer" containerID="46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885" Jan 25 00:10:09 crc kubenswrapper[4947]: E0125 00:10:09.529851 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.553534 4947 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z"
Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.569574 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.569622 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.569639 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.569670 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.569686 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:09Z","lastTransitionTime":"2026-01-25T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.583255 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.605668 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.623594 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.641166 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc 
kubenswrapper[4947]: I0125 00:10:09.666816 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z"
Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.673170 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.673225 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.673242 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.673270 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.673289 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:09Z","lastTransitionTime":"2026-01-25T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.688393 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.706183 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.725420 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd10641-e4b3-4d72-ba91-ed540316eb7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.745927 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.776576 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3
d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.779397 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.779452 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.779472 4947 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.779502 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.779521 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:09Z","lastTransitionTime":"2026-01-25T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.801511 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] 
\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.822729 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.857278 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://911b880cf46f524af0246de63d7a01c14c9431d20d77b03f642d36b3a8cd5e2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:09:53Z\\\",\\\"message\\\":\\\"]} name:Service_openshift-ingress/router-internal-default_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.176:1936: 10.217.4.176:443: 10.217.4.176:80:]}] Rows:[] Columns:[] Mutations:[] 
Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {595f6e90-7cd8-4871-85ab-9519d3c9c3e5}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0125 00:09:53.330571 6392 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:09:53Z is after 2025-08-24T17:21:41Z]\\\\nI0125 00:09:53.330579 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:08Z\\\",\\\"message\\\":\\\" 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579455 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579460 6597 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:08.579464 6597 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:08.579468 6597 
default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579476 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579506 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579511 6597 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0125 00:10:08.579524 6597 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cn
i/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.879403 4947 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.883319 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.883389 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.883406 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.883433 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.883451 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:09Z","lastTransitionTime":"2026-01-25T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.907851 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.924368 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:09Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.987656 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.987716 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.987739 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.987810 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:09 crc kubenswrapper[4947]: I0125 00:10:09.987837 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:09Z","lastTransitionTime":"2026-01-25T00:10:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.055973 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 14:58:58.27375202 +0000 UTC Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.088916 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.088916 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.088997 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.089423 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:10 crc kubenswrapper[4947]: E0125 00:10:10.089681 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:10 crc kubenswrapper[4947]: E0125 00:10:10.089918 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:10 crc kubenswrapper[4947]: E0125 00:10:10.090175 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:10 crc kubenswrapper[4947]: E0125 00:10:10.090412 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.091057 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.091112 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.091168 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.091199 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.091220 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:10Z","lastTransitionTime":"2026-01-25T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.195299 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.195372 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.195391 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.195422 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.195441 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:10Z","lastTransitionTime":"2026-01-25T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.298552 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.298609 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.298627 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.298651 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.298669 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:10Z","lastTransitionTime":"2026-01-25T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.402493 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.402559 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.402577 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.402604 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.402621 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:10Z","lastTransitionTime":"2026-01-25T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.505849 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.505915 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.505934 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.505959 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.505977 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:10Z","lastTransitionTime":"2026-01-25T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.535612 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/2.log" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.540454 4947 scope.go:117] "RemoveContainer" containerID="46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885" Jan 25 00:10:10 crc kubenswrapper[4947]: E0125 00:10:10.540648 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.563431 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.584565 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.603953 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.609424 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.609469 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.609485 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.609509 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.609527 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:10Z","lastTransitionTime":"2026-01-25T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.624335 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.683355 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{
\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.697162 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.706893 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.711844 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.711959 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.711975 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.711994 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.712006 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:10Z","lastTransitionTime":"2026-01-25T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.718003 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc 
kubenswrapper[4947]: I0125 00:10:10.732746 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a4
28c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"te
rminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.744086 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd10641-e4b3-4d72-ba91-ed540316eb7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.755794 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.774680 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3
d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.790634 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.802863 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.815086 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.815150 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.815163 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.815183 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.815197 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:10Z","lastTransitionTime":"2026-01-25T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.818046 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.828861 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.851021 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:08Z\\\",\\\"message\\\":\\\" 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579455 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579460 6597 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:08.579464 6597 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:08.579468 6597 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579476 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579506 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579511 6597 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0125 00:10:08.579524 6597 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166
ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:10Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.918450 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.918498 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.918518 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.918543 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:10 crc kubenswrapper[4947]: I0125 00:10:10.918561 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:10Z","lastTransitionTime":"2026-01-25T00:10:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.022099 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.022224 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.022242 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.022270 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.022290 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:11Z","lastTransitionTime":"2026-01-25T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.057068 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 00:17:21.542198067 +0000 UTC Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.111909 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.125772 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.125836 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.125856 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.125884 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.125903 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:11Z","lastTransitionTime":"2026-01-25T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.134044 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.152373 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.189094 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:08Z\\\",\\\"message\\\":\\\" 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579455 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579460 6597 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:08.579464 6597 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:08.579468 6597 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579476 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579506 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579511 6597 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0125 00:10:08.579524 6597 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166
ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.209911 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.225583 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc 
kubenswrapper[4947]: I0125 00:10:11.229063 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.229159 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.229180 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.229211 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.229229 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:11Z","lastTransitionTime":"2026-01-25T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.241697 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016
dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.259620 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.278242 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.297285 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.323522 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.332801 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.333039 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.333219 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.333391 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.333510 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:11Z","lastTransitionTime":"2026-01-25T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.340830 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z 
is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.361275 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"
}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c98711
7ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.376776 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd10641-e4b3-4d72-ba91-ed540316eb7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.396871 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.416960 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3
d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.433812 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:11Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.436249 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.436309 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.436329 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:11 crc 
kubenswrapper[4947]: I0125 00:10:11.436546 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.436566 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:11Z","lastTransitionTime":"2026-01-25T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.540610 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.540671 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.540685 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.540708 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.540749 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:11Z","lastTransitionTime":"2026-01-25T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.644251 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.644296 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.644312 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.644340 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.644357 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:11Z","lastTransitionTime":"2026-01-25T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.747784 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.747851 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.747870 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.747896 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.747914 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:11Z","lastTransitionTime":"2026-01-25T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.797930 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.798057 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.798109 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.798191 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798249 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:10:43.798204955 +0000 UTC m=+83.031195435 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.798309 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798348 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798359 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798508 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:43.798474311 +0000 UTC m=+83.031464791 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798544 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:43.798526712 +0000 UTC m=+83.031517192 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798553 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798592 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798617 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798706 4947 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:43.798684886 +0000 UTC m=+83.031675366 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798817 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798844 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798862 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:10:11 crc kubenswrapper[4947]: E0125 00:10:11.798929 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:43.798910451 +0000 UTC m=+83.031900941 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.852919 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.852999 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.853016 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.853044 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.853083 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:11Z","lastTransitionTime":"2026-01-25T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.956199 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.956267 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.956286 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.956311 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:11 crc kubenswrapper[4947]: I0125 00:10:11.956332 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:11Z","lastTransitionTime":"2026-01-25T00:10:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.057608 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 06:36:42.729751033 +0000 UTC Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.060560 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.060625 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.060646 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.060672 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.060688 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:12Z","lastTransitionTime":"2026-01-25T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.089095 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.089096 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.089190 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:12 crc kubenswrapper[4947]: E0125 00:10:12.089216 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.089162 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:12 crc kubenswrapper[4947]: E0125 00:10:12.089374 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:12 crc kubenswrapper[4947]: E0125 00:10:12.089497 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:12 crc kubenswrapper[4947]: E0125 00:10:12.089656 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.163881 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.163947 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.163971 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.164003 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.164023 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:12Z","lastTransitionTime":"2026-01-25T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.267190 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.267268 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.267286 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.267335 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.267352 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:12Z","lastTransitionTime":"2026-01-25T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.371023 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.371083 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.371101 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.371154 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.371174 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:12Z","lastTransitionTime":"2026-01-25T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.474051 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.474169 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.474205 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.474237 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.474257 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:12Z","lastTransitionTime":"2026-01-25T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.578673 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.578757 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.578779 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.578806 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.578825 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:12Z","lastTransitionTime":"2026-01-25T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.607768 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:12 crc kubenswrapper[4947]: E0125 00:10:12.608010 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:10:12 crc kubenswrapper[4947]: E0125 00:10:12.608184 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs podName:9a64fbf1-68fc-4379-9bb7-009c4f2cc812 nodeName:}" failed. No retries permitted until 2026-01-25 00:10:28.608107837 +0000 UTC m=+67.841098307 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs") pod "network-metrics-daemon-hj7kb" (UID: "9a64fbf1-68fc-4379-9bb7-009c4f2cc812") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.682353 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.682421 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.682437 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.682462 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.682480 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:12Z","lastTransitionTime":"2026-01-25T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.785670 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.785741 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.785765 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.785798 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.785837 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:12Z","lastTransitionTime":"2026-01-25T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.889483 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.889572 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.889643 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.889684 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.889710 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:12Z","lastTransitionTime":"2026-01-25T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.992832 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.992912 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.992943 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.993044 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:12 crc kubenswrapper[4947]: I0125 00:10:12.993063 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:12Z","lastTransitionTime":"2026-01-25T00:10:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.058638 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 07:34:13.141633163 +0000 UTC
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.095279 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.095330 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.095339 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.095354 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.095366 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:13Z","lastTransitionTime":"2026-01-25T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.198947 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.199013 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.199024 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.199047 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.199064 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:13Z","lastTransitionTime":"2026-01-25T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.302468 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.302912 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.302973 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.303009 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.303033 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:13Z","lastTransitionTime":"2026-01-25T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.406535 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.406620 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.406640 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.406668 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.406687 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:13Z","lastTransitionTime":"2026-01-25T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.509713 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.509791 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.509817 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.509850 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.509872 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:13Z","lastTransitionTime":"2026-01-25T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.613599 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.613678 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.613700 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.613730 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.613752 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:13Z","lastTransitionTime":"2026-01-25T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.717818 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.717897 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.717920 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.717947 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.717965 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:13Z","lastTransitionTime":"2026-01-25T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.821761 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.821837 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.821862 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.821895 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.821915 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:13Z","lastTransitionTime":"2026-01-25T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.925939 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.926002 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.926020 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.926048 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:13 crc kubenswrapper[4947]: I0125 00:10:13.926065 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:13Z","lastTransitionTime":"2026-01-25T00:10:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.030288 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.030349 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.030366 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.030391 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.030409 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.058910 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:00:18.730851083 +0000 UTC
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.089511 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.089591 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.089528 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.089528 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 25 00:10:14 crc kubenswrapper[4947]: E0125 00:10:14.089717 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 25 00:10:14 crc kubenswrapper[4947]: E0125 00:10:14.089924 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 25 00:10:14 crc kubenswrapper[4947]: E0125 00:10:14.090051 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812"
Jan 25 00:10:14 crc kubenswrapper[4947]: E0125 00:10:14.090201 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.133878 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.133936 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.133959 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.133988 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.134009 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.237037 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.237102 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.237121 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.237188 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.237211 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.340647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.340712 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.340731 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.340757 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.340775 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.443649 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.443722 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.443740 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.444089 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.444118 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.547805 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.547858 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.547876 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.547903 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.547921 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.651237 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.651664 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.651849 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.652044 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.652261 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.756083 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.756449 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.756814 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.756976 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.757109 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.837240 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.837301 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.837320 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.837347 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.837368 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:14 crc kubenswrapper[4947]: E0125 00:10:14.860550 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:14Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.867037 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.867095 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.867118 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.867184 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.867209 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:14 crc kubenswrapper[4947]: E0125 00:10:14.890951 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:14Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.897150 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.897208 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.897226 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.897251 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.897269 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:14 crc kubenswrapper[4947]: E0125 00:10:14.921623 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:14Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.928860 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.928936 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.928970 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.929004 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.929029 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:14 crc kubenswrapper[4947]: E0125 00:10:14.952808 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:14Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.958258 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.958642 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.958775 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.959089 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.959264 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:14 crc kubenswrapper[4947]: E0125 00:10:14.981617 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:14Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:14 crc kubenswrapper[4947]: E0125 00:10:14.981842 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.983769 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.983817 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.983835 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.983859 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:14 crc kubenswrapper[4947]: I0125 00:10:14.983878 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:14Z","lastTransitionTime":"2026-01-25T00:10:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.060011 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:04:14.471442865 +0000 UTC Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.087086 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.087217 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.087245 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.087274 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.087298 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:15Z","lastTransitionTime":"2026-01-25T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.190809 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.190887 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.190906 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.190964 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.190983 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:15Z","lastTransitionTime":"2026-01-25T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.294503 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.294572 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.294584 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.294605 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.294619 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:15Z","lastTransitionTime":"2026-01-25T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.397887 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.397968 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.397987 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.398024 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.398043 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:15Z","lastTransitionTime":"2026-01-25T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.501476 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.501531 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.501544 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.501564 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.501579 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:15Z","lastTransitionTime":"2026-01-25T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.605477 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.605558 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.605576 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.605605 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.605626 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:15Z","lastTransitionTime":"2026-01-25T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.709025 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.709101 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.709118 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.709177 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.709199 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:15Z","lastTransitionTime":"2026-01-25T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.813324 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.813415 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.813442 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.813474 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.813496 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:15Z","lastTransitionTime":"2026-01-25T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.917254 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.917316 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.917332 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.917354 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:15 crc kubenswrapper[4947]: I0125 00:10:15.917370 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:15Z","lastTransitionTime":"2026-01-25T00:10:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.021558 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.021629 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.021647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.021676 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.021695 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:16Z","lastTransitionTime":"2026-01-25T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.060641 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 15:47:35.936331665 +0000 UTC Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.089361 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.089412 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:16 crc kubenswrapper[4947]: E0125 00:10:16.089569 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.089594 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.089673 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:16 crc kubenswrapper[4947]: E0125 00:10:16.089814 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:16 crc kubenswrapper[4947]: E0125 00:10:16.089902 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:16 crc kubenswrapper[4947]: E0125 00:10:16.090109 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.125882 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.125953 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.125976 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.126004 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.126023 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:16Z","lastTransitionTime":"2026-01-25T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.230496 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.230577 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.230602 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.230635 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.230658 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:16Z","lastTransitionTime":"2026-01-25T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.334758 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.334823 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.334835 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.334857 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.334872 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:16Z","lastTransitionTime":"2026-01-25T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.438667 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.438724 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.438734 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.438749 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.438759 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:16Z","lastTransitionTime":"2026-01-25T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.542362 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.542441 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.542466 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.542506 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.542530 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:16Z","lastTransitionTime":"2026-01-25T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.646622 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.646700 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.646721 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.646747 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.646767 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:16Z","lastTransitionTime":"2026-01-25T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.751042 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.751174 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.751202 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.751226 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.751243 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:16Z","lastTransitionTime":"2026-01-25T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.855671 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.855762 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.855786 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.855820 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.855845 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:16Z","lastTransitionTime":"2026-01-25T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.958953 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.959033 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.959057 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.959089 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:16 crc kubenswrapper[4947]: I0125 00:10:16.959107 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:16Z","lastTransitionTime":"2026-01-25T00:10:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.061315 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 17:16:34.459499196 +0000 UTC Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.063305 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.063372 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.063397 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.063427 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.063448 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:17Z","lastTransitionTime":"2026-01-25T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.167098 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.167243 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.167300 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.167330 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.167390 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:17Z","lastTransitionTime":"2026-01-25T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.270944 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.271078 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.271180 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.271222 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.271285 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:17Z","lastTransitionTime":"2026-01-25T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.375274 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.375349 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.375373 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.375404 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.375429 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:17Z","lastTransitionTime":"2026-01-25T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.478909 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.479384 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.479648 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.479843 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.480074 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:17Z","lastTransitionTime":"2026-01-25T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.583599 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.583666 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.583684 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.583712 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.583732 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:17Z","lastTransitionTime":"2026-01-25T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.687713 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.687785 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.687802 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.687828 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.687846 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:17Z","lastTransitionTime":"2026-01-25T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.790915 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.790963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.790983 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.791008 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.791026 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:17Z","lastTransitionTime":"2026-01-25T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.894471 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.894522 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.894538 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.894562 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.894579 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:17Z","lastTransitionTime":"2026-01-25T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.998684 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.998748 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.998765 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.998794 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:17 crc kubenswrapper[4947]: I0125 00:10:17.998812 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:17Z","lastTransitionTime":"2026-01-25T00:10:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.061875 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 05:51:42.360592676 +0000 UTC Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.088636 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.088798 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.088864 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:18 crc kubenswrapper[4947]: E0125 00:10:18.088868 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.088927 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:18 crc kubenswrapper[4947]: E0125 00:10:18.089118 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:18 crc kubenswrapper[4947]: E0125 00:10:18.089294 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:18 crc kubenswrapper[4947]: E0125 00:10:18.089470 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.102022 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.102105 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.102119 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.102192 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.102207 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:18Z","lastTransitionTime":"2026-01-25T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.205114 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.205254 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.205279 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.205310 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.205333 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:18Z","lastTransitionTime":"2026-01-25T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.308743 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.308792 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.308808 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.308826 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.308837 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:18Z","lastTransitionTime":"2026-01-25T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.412306 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.412370 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.412382 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.412405 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.412418 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:18Z","lastTransitionTime":"2026-01-25T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.516054 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.516116 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.516128 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.516150 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.516165 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:18Z","lastTransitionTime":"2026-01-25T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.620610 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.620674 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.620690 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.620711 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.620726 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:18Z","lastTransitionTime":"2026-01-25T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.724583 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.724647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.724660 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.724682 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.724696 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:18Z","lastTransitionTime":"2026-01-25T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.828254 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.828343 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.828356 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.828374 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.828388 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:18Z","lastTransitionTime":"2026-01-25T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.931912 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.931989 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.932007 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.932034 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:18 crc kubenswrapper[4947]: I0125 00:10:18.932053 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:18Z","lastTransitionTime":"2026-01-25T00:10:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.036282 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.036356 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.036374 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.036404 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.036429 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:19Z","lastTransitionTime":"2026-01-25T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.062837 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 18:37:00.641101896 +0000 UTC Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.139576 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.139640 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.139660 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.139687 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.139709 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:19Z","lastTransitionTime":"2026-01-25T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.243327 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.243402 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.243420 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.243450 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.243471 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:19Z","lastTransitionTime":"2026-01-25T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.345859 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.345914 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.345933 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.345955 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.345972 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:19Z","lastTransitionTime":"2026-01-25T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.449196 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.449273 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.449296 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.449324 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.449343 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:19Z","lastTransitionTime":"2026-01-25T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.552372 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.552445 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.552468 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.552496 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.552519 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:19Z","lastTransitionTime":"2026-01-25T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.655685 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.655753 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.655779 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.655811 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.655835 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:19Z","lastTransitionTime":"2026-01-25T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.758472 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.758545 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.758563 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.758595 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.758614 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:19Z","lastTransitionTime":"2026-01-25T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.862566 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.862624 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.862645 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.862670 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.862689 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:19Z","lastTransitionTime":"2026-01-25T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.965951 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.966024 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.966043 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.966079 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:19 crc kubenswrapper[4947]: I0125 00:10:19.966098 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:19Z","lastTransitionTime":"2026-01-25T00:10:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.063867 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 04:06:48.660332066 +0000 UTC Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.069573 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.069647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.069672 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.069704 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.069722 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:20Z","lastTransitionTime":"2026-01-25T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.089245 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.089343 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.089343 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.089503 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:20 crc kubenswrapper[4947]: E0125 00:10:20.089498 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:20 crc kubenswrapper[4947]: E0125 00:10:20.089655 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:20 crc kubenswrapper[4947]: E0125 00:10:20.090391 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:20 crc kubenswrapper[4947]: E0125 00:10:20.090575 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.172316 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.172383 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.172406 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.172442 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.172465 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:20Z","lastTransitionTime":"2026-01-25T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.275783 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.275859 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.275889 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.275920 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.275942 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:20Z","lastTransitionTime":"2026-01-25T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.379214 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.379271 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.379293 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.379323 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.379346 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:20Z","lastTransitionTime":"2026-01-25T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.488449 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.488569 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.488594 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.488635 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.488654 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:20Z","lastTransitionTime":"2026-01-25T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.592466 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.592521 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.592538 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.592565 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.592584 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:20Z","lastTransitionTime":"2026-01-25T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.696347 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.696425 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.696448 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.696511 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.696533 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:20Z","lastTransitionTime":"2026-01-25T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.800530 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.800578 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.800599 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.800625 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.800646 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:20Z","lastTransitionTime":"2026-01-25T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.904653 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.904734 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.904756 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.904788 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:20 crc kubenswrapper[4947]: I0125 00:10:20.904810 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:20Z","lastTransitionTime":"2026-01-25T00:10:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.007936 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.007997 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.008014 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.008040 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.008058 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:21Z","lastTransitionTime":"2026-01-25T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.064216 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 14:54:20.741350199 +0000 UTC Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.108596 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.110405 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.110467 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.110486 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.110512 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.110531 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:21Z","lastTransitionTime":"2026-01-25T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.132447 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.145399 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.168681 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:08Z\\\",\\\"message\\\":\\\" 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579455 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579460 6597 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:08.579464 6597 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:08.579468 6597 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579476 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579506 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579511 6597 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0125 00:10:08.579524 6597 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166
ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.186841 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.208165 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",
\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.212950 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.213005 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.213025 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 
00:10:21.213050 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.213068 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:21Z","lastTransitionTime":"2026-01-25T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.229282 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.244105 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.259623 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.281001 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.301818 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.315957 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.316018 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.316038 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.316065 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.316085 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:21Z","lastTransitionTime":"2026-01-25T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.319360 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.340214 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 
00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.357372 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd10641-e4b3-4d72-ba91-ed540316eb7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
5T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.376920 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.394622 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3
d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.413088 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:21Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.418858 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.418938 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.418966 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:21 crc 
kubenswrapper[4947]: I0125 00:10:21.418998 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.419018 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:21Z","lastTransitionTime":"2026-01-25T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.522109 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.522216 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.522241 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.522273 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.522296 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:21Z","lastTransitionTime":"2026-01-25T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.625090 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.625187 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.625207 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.625270 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.625289 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:21Z","lastTransitionTime":"2026-01-25T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.728583 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.728878 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.729046 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.729226 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.729440 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:21Z","lastTransitionTime":"2026-01-25T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.833568 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.833637 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.833656 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.833682 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.833701 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:21Z","lastTransitionTime":"2026-01-25T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.936873 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.936944 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.936967 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.936993 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:21 crc kubenswrapper[4947]: I0125 00:10:21.937011 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:21Z","lastTransitionTime":"2026-01-25T00:10:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.040481 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.040551 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.040570 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.040599 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.040618 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:22Z","lastTransitionTime":"2026-01-25T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.065317 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:11:27.889785442 +0000 UTC Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.089093 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.089179 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.089188 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:22 crc kubenswrapper[4947]: E0125 00:10:22.089262 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:22 crc kubenswrapper[4947]: E0125 00:10:22.089401 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:22 crc kubenswrapper[4947]: E0125 00:10:22.089645 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.089706 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:22 crc kubenswrapper[4947]: E0125 00:10:22.089768 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.143775 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.143848 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.143866 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.143894 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.143912 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:22Z","lastTransitionTime":"2026-01-25T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.247805 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.247866 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.247886 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.247916 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.247939 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:22Z","lastTransitionTime":"2026-01-25T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.351744 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.351810 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.351833 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.351863 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.351888 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:22Z","lastTransitionTime":"2026-01-25T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.455670 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.455742 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.455764 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.455793 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.455814 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:22Z","lastTransitionTime":"2026-01-25T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.559011 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.559088 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.559108 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.559201 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.559237 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:22Z","lastTransitionTime":"2026-01-25T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.662647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.662708 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.662740 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.662765 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.662788 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:22Z","lastTransitionTime":"2026-01-25T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.765646 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.765707 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.765726 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.765753 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.765771 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:22Z","lastTransitionTime":"2026-01-25T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.869953 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.870010 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.870027 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.870051 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.870070 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:22Z","lastTransitionTime":"2026-01-25T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.975452 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.975512 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.975520 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.975561 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:22 crc kubenswrapper[4947]: I0125 00:10:22.975572 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:22Z","lastTransitionTime":"2026-01-25T00:10:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.066157 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 23:28:38.960613562 +0000 UTC Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.078697 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.078766 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.078785 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.078814 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.078833 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:23Z","lastTransitionTime":"2026-01-25T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.182577 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.182660 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.182679 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.182707 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.182729 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:23Z","lastTransitionTime":"2026-01-25T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.286849 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.286922 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.286953 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.286987 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.287005 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:23Z","lastTransitionTime":"2026-01-25T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.390852 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.390935 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.390951 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.390979 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.391002 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:23Z","lastTransitionTime":"2026-01-25T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.494497 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.494570 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.494587 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.494618 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.494634 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:23Z","lastTransitionTime":"2026-01-25T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.597938 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.598019 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.598043 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.598072 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.598093 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:23Z","lastTransitionTime":"2026-01-25T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.701621 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.701711 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.701736 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.701774 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.701799 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:23Z","lastTransitionTime":"2026-01-25T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.806049 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.806186 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.806207 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.806246 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.806274 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:23Z","lastTransitionTime":"2026-01-25T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.909508 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.909635 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.909667 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.909695 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:23 crc kubenswrapper[4947]: I0125 00:10:23.909714 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:23Z","lastTransitionTime":"2026-01-25T00:10:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.012799 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.012877 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.012902 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.012933 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.012957 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:24Z","lastTransitionTime":"2026-01-25T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.067117 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 12:11:50.657964341 +0000 UTC Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.089636 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.089748 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.089779 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:24 crc kubenswrapper[4947]: E0125 00:10:24.089857 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:24 crc kubenswrapper[4947]: E0125 00:10:24.090080 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.090242 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.090668 4947 scope.go:117] "RemoveContainer" containerID="46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885" Jan 25 00:10:24 crc kubenswrapper[4947]: E0125 00:10:24.090942 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" Jan 25 00:10:24 crc kubenswrapper[4947]: E0125 00:10:24.091048 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:24 crc kubenswrapper[4947]: E0125 00:10:24.091171 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.115934 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.115988 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.116005 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.116028 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.116041 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:24Z","lastTransitionTime":"2026-01-25T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.219558 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.219625 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.219646 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.219671 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.219690 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:24Z","lastTransitionTime":"2026-01-25T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.323041 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.323118 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.323170 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.323199 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.323217 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:24Z","lastTransitionTime":"2026-01-25T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.426521 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.426564 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.426574 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.426598 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.426609 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:24Z","lastTransitionTime":"2026-01-25T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.529929 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.530002 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.530026 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.530054 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.530075 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:24Z","lastTransitionTime":"2026-01-25T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.632953 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.633025 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.633039 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.633073 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.633091 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:24Z","lastTransitionTime":"2026-01-25T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.736768 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.736817 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.736829 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.736850 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.736863 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:24Z","lastTransitionTime":"2026-01-25T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.840774 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.841337 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.841594 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.841827 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.842041 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:24Z","lastTransitionTime":"2026-01-25T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.945372 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.945431 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.945460 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.945478 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:24 crc kubenswrapper[4947]: I0125 00:10:24.945487 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:24Z","lastTransitionTime":"2026-01-25T00:10:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.048952 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.049014 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.049040 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.049072 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.049094 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.067669 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:54:36.444008856 +0000 UTC Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.152498 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.152563 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.152587 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.152617 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.152641 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.195505 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.195568 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.195580 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.195604 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.195614 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: E0125 00:10:25.214067 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:25Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.226090 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.226465 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.226609 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.226766 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.226911 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: E0125 00:10:25.239871 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:25Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.244445 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.244480 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.244488 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.244503 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.244513 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: E0125 00:10:25.263609 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:25Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.268312 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.268374 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.268387 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.268408 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.268421 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: E0125 00:10:25.282852 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:25Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.287288 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.287434 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.287522 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.287605 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.287686 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: E0125 00:10:25.305011 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:25Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:25 crc kubenswrapper[4947]: E0125 00:10:25.305331 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.307471 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.307520 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.307534 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.307556 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.307568 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.410747 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.410809 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.410820 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.410855 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.410867 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.513896 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.513949 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.513960 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.513983 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.513993 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.617424 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.617538 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.617551 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.617568 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.617594 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.721529 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.721601 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.721620 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.721648 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.721666 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.825205 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.825277 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.825296 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.825323 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.825343 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.930524 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.930847 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.931419 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.932716 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:25 crc kubenswrapper[4947]: I0125 00:10:25.933572 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:25Z","lastTransitionTime":"2026-01-25T00:10:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.036774 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.037479 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.037727 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.037910 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.038088 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:26Z","lastTransitionTime":"2026-01-25T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.068691 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 17:48:17.847082923 +0000 UTC Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.089674 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.089688 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.089712 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.089718 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:26 crc kubenswrapper[4947]: E0125 00:10:26.090266 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:26 crc kubenswrapper[4947]: E0125 00:10:26.090105 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:26 crc kubenswrapper[4947]: E0125 00:10:26.090667 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:26 crc kubenswrapper[4947]: E0125 00:10:26.090829 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.140889 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.141266 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.141417 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.141607 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.141785 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:26Z","lastTransitionTime":"2026-01-25T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.245817 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.245885 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.245903 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.245929 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.245950 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:26Z","lastTransitionTime":"2026-01-25T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.349725 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.349770 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.349782 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.349800 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.349814 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:26Z","lastTransitionTime":"2026-01-25T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.453090 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.453156 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.453169 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.453187 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.453204 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:26Z","lastTransitionTime":"2026-01-25T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.556407 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.556639 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.556733 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.556809 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.556872 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:26Z","lastTransitionTime":"2026-01-25T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.659811 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.659868 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.659886 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.659906 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.659919 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:26Z","lastTransitionTime":"2026-01-25T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.763209 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.763251 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.763262 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.763279 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.763290 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:26Z","lastTransitionTime":"2026-01-25T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.866766 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.866835 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.866855 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.866881 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.866899 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:26Z","lastTransitionTime":"2026-01-25T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.969748 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.969794 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.969807 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.969828 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:26 crc kubenswrapper[4947]: I0125 00:10:26.969842 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:26Z","lastTransitionTime":"2026-01-25T00:10:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.070798 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 09:34:29.54738216 +0000 UTC Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.072990 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.073042 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.073058 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.073078 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.073091 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:27Z","lastTransitionTime":"2026-01-25T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.175702 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.175778 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.175797 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.175823 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.175838 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:27Z","lastTransitionTime":"2026-01-25T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.278940 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.279018 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.279037 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.279065 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.279087 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:27Z","lastTransitionTime":"2026-01-25T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.382427 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.382475 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.382488 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.382506 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.382518 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:27Z","lastTransitionTime":"2026-01-25T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.485636 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.485684 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.485699 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.485719 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.485732 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:27Z","lastTransitionTime":"2026-01-25T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.588368 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.588428 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.588453 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.588481 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.588504 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:27Z","lastTransitionTime":"2026-01-25T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.690226 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.690307 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.690322 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.690337 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.690348 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:27Z","lastTransitionTime":"2026-01-25T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.792213 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.792251 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.792264 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.792280 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.792291 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:27Z","lastTransitionTime":"2026-01-25T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.894947 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.894986 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.894998 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.895013 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.895026 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:27Z","lastTransitionTime":"2026-01-25T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.997833 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.997882 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.997892 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.997910 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:27 crc kubenswrapper[4947]: I0125 00:10:27.997923 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:27Z","lastTransitionTime":"2026-01-25T00:10:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.071551 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 00:33:57.31110085 +0000 UTC Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.089321 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:28 crc kubenswrapper[4947]: E0125 00:10:28.089517 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.090358 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.090440 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.090486 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:28 crc kubenswrapper[4947]: E0125 00:10:28.090698 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:28 crc kubenswrapper[4947]: E0125 00:10:28.090835 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:28 crc kubenswrapper[4947]: E0125 00:10:28.090957 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.100936 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.100979 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.101003 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.101030 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.101050 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:28Z","lastTransitionTime":"2026-01-25T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.204188 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.204233 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.204251 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.204276 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.204294 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:28Z","lastTransitionTime":"2026-01-25T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.307161 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.307235 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.307258 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.307290 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.307316 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:28Z","lastTransitionTime":"2026-01-25T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.411014 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.411075 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.411093 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.411118 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.411698 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:28Z","lastTransitionTime":"2026-01-25T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.514332 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.514409 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.514428 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.514453 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.514470 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:28Z","lastTransitionTime":"2026-01-25T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.617333 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.617398 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.617418 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.617446 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.617473 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:28Z","lastTransitionTime":"2026-01-25T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.701566 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:28 crc kubenswrapper[4947]: E0125 00:10:28.701829 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:10:28 crc kubenswrapper[4947]: E0125 00:10:28.701939 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs podName:9a64fbf1-68fc-4379-9bb7-009c4f2cc812 nodeName:}" failed. No retries permitted until 2026-01-25 00:11:00.701910097 +0000 UTC m=+99.934900567 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs") pod "network-metrics-daemon-hj7kb" (UID: "9a64fbf1-68fc-4379-9bb7-009c4f2cc812") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.721010 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.721074 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.721092 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.721117 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.721161 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:28Z","lastTransitionTime":"2026-01-25T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.823653 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.823744 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.823758 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.823782 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.823798 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:28Z","lastTransitionTime":"2026-01-25T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.926244 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.926291 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.926327 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.926346 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:28 crc kubenswrapper[4947]: I0125 00:10:28.926364 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:28Z","lastTransitionTime":"2026-01-25T00:10:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.028706 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.028772 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.028797 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.028825 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.028844 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:29Z","lastTransitionTime":"2026-01-25T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.072434 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 13:15:55.892322349 +0000 UTC Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.131239 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.131342 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.131373 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.131409 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.131434 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:29Z","lastTransitionTime":"2026-01-25T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.235278 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.235331 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.235343 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.235360 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.235372 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:29Z","lastTransitionTime":"2026-01-25T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.338219 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.338293 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.338314 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.338387 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.338439 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:29Z","lastTransitionTime":"2026-01-25T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.441970 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.442030 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.442049 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.442076 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.442092 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:29Z","lastTransitionTime":"2026-01-25T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.544392 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.544429 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.544438 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.544454 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.544463 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:29Z","lastTransitionTime":"2026-01-25T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.647504 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.647570 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.647583 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.647605 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.647623 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:29Z","lastTransitionTime":"2026-01-25T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.749893 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.750004 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.750017 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.750035 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.750046 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:29Z","lastTransitionTime":"2026-01-25T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.852888 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.852931 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.852939 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.852955 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.852965 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:29Z","lastTransitionTime":"2026-01-25T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.955466 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.955518 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.955530 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.955550 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:29 crc kubenswrapper[4947]: I0125 00:10:29.955562 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:29Z","lastTransitionTime":"2026-01-25T00:10:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.059352 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.059707 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.059895 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.060027 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.060193 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:30Z","lastTransitionTime":"2026-01-25T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.073214 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 08:15:14.048557083 +0000 UTC Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.089742 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.089821 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.089960 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:30 crc kubenswrapper[4947]: E0125 00:10:30.089960 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.090010 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:30 crc kubenswrapper[4947]: E0125 00:10:30.090194 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:30 crc kubenswrapper[4947]: E0125 00:10:30.090306 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:30 crc kubenswrapper[4947]: E0125 00:10:30.090382 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.162964 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.163015 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.163032 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.163057 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.163074 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:30Z","lastTransitionTime":"2026-01-25T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.266171 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.266212 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.266225 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.266244 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.266257 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:30Z","lastTransitionTime":"2026-01-25T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.369223 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.369253 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.369261 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.369275 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.369285 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:30Z","lastTransitionTime":"2026-01-25T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.472036 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.472085 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.472098 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.472116 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.472150 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:30Z","lastTransitionTime":"2026-01-25T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.575884 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.575959 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.575978 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.576003 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.576020 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:30Z","lastTransitionTime":"2026-01-25T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.679236 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.679291 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.679306 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.679328 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.679345 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:30Z","lastTransitionTime":"2026-01-25T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.782737 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.782813 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.782835 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.782894 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.782911 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:30Z","lastTransitionTime":"2026-01-25T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.886412 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.886522 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.886543 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.886571 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.886598 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:30Z","lastTransitionTime":"2026-01-25T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.989253 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.989321 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.989339 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.989365 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:30 crc kubenswrapper[4947]: I0125 00:10:30.989387 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:30Z","lastTransitionTime":"2026-01-25T00:10:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.074254 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 13:07:41.061589687 +0000 UTC Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.091161 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.091185 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.091194 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.091210 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.091220 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:31Z","lastTransitionTime":"2026-01-25T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.106014 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.126719 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.139333 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.160209 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:08Z\\\",\\\"message\\\":\\\" 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579455 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579460 6597 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:08.579464 6597 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:08.579468 6597 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579476 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579506 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579511 6597 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0125 00:10:08.579524 6597 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166
ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.173692 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.186807 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.193743 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.193803 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.193821 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.193849 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.193868 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:31Z","lastTransitionTime":"2026-01-25T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.199478 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc 
kubenswrapper[4947]: I0125 00:10:31.217604 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.230286 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.243362 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.258856 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.274515 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.289456 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.296224 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.296268 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.296280 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.296297 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.296308 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:31Z","lastTransitionTime":"2026-01-25T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.302211 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.314686 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd10641-e4b3-4d72-ba91-ed540316eb7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.329363 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.342072 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.401295 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.401346 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.401358 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:31 crc 
kubenswrapper[4947]: I0125 00:10:31.401384 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.401406 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:31Z","lastTransitionTime":"2026-01-25T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.504041 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.504086 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.504097 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.504117 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.504168 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:31Z","lastTransitionTime":"2026-01-25T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.656728 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.656763 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.656772 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.656786 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.656795 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:31Z","lastTransitionTime":"2026-01-25T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.660911 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9fspn_2d914454-2c17-47f2-aa53-aba3bfaad296/kube-multus/0.log" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.661019 4947 generic.go:334] "Generic (PLEG): container finished" podID="2d914454-2c17-47f2-aa53-aba3bfaad296" containerID="e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656" exitCode=1 Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.661081 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9fspn" event={"ID":"2d914454-2c17-47f2-aa53-aba3bfaad296","Type":"ContainerDied","Data":"e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656"} Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.661838 4947 scope.go:117] "RemoveContainer" containerID="e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.680732 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd10641-e4b3-4d72-ba91-ed540316eb7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.699371 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.712582 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3
d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.729792 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c
97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.741412 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.760440 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.760507 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.760527 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:31 crc 
kubenswrapper[4947]: I0125 00:10:31.760958 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.760999 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:31Z","lastTransitionTime":"2026-01-25T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.762291 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:08Z\\\",\\\"message\\\":\\\" 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579455 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579460 6597 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:08.579464 6597 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:08.579468 6597 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579476 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579506 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579511 6597 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0125 00:10:08.579524 6597 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166
ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.780103 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.795911 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.806708 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.819913 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.836796 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b8
6e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa930
89f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.852573 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:31Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:30Z\\\",\\\"message\\\":\\\"2026-01-25T00:09:45+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9993edec-22d0-4587-a147-282f628c2ecf\\\\n2026-01-25T00:09:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9993edec-22d0-4587-a147-282f628c2ecf to /host/opt/cni/bin/\\\\n2026-01-25T00:09:45Z [verbose] multus-daemon started\\\\n2026-01-25T00:09:45Z [verbose] Readiness Indicator file check\\\\n2026-01-25T00:10:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.861670 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.863927 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.863985 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.864000 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.864018 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.864028 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:31Z","lastTransitionTime":"2026-01-25T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.874562 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc 
kubenswrapper[4947]: I0125 00:10:31.891624 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.913885 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.931509 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:31Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.967029 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.967103 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.967113 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.967149 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:31 crc kubenswrapper[4947]: I0125 00:10:31.967162 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:31Z","lastTransitionTime":"2026-01-25T00:10:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.070531 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.070589 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.070606 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.070630 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.070647 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:32Z","lastTransitionTime":"2026-01-25T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.075009 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 09:46:40.370629733 +0000 UTC Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.089485 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:32 crc kubenswrapper[4947]: E0125 00:10:32.089688 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.089949 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:32 crc kubenswrapper[4947]: E0125 00:10:32.090054 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.090326 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.090400 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:32 crc kubenswrapper[4947]: E0125 00:10:32.090489 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:32 crc kubenswrapper[4947]: E0125 00:10:32.090573 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.174234 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.174298 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.174308 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.174337 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.174348 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:32Z","lastTransitionTime":"2026-01-25T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.276892 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.276962 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.276984 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.277019 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.277044 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:32Z","lastTransitionTime":"2026-01-25T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.384747 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.384819 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.384838 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.384868 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.384887 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:32Z","lastTransitionTime":"2026-01-25T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.487672 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.487718 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.487731 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.487751 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.487766 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:32Z","lastTransitionTime":"2026-01-25T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.590828 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.590920 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.590937 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.590963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.590984 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:32Z","lastTransitionTime":"2026-01-25T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.667487 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9fspn_2d914454-2c17-47f2-aa53-aba3bfaad296/kube-multus/0.log" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.667575 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9fspn" event={"ID":"2d914454-2c17-47f2-aa53-aba3bfaad296","Type":"ContainerStarted","Data":"6c0ba6afdf615533c6d174c24f3d76e5070b4436d09dbe07c9a6c595203d5470"} Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.686813 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.694303 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.694355 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.694369 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.694385 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.694411 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:32Z","lastTransitionTime":"2026-01-25T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.704980 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.719986 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.740332 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.758017 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c0ba6afdf615533c6d174c24f3d76e5070b4436d09dbe07c9a6c595203d5470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:30Z\\\",\\\"message\\\":\\\"2026-01-25T00:09:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9993edec-22d0-4587-a147-282f628c2ecf\\\\n2026-01-25T00:09:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9993edec-22d0-4587-a147-282f628c2ecf to /host/opt/cni/bin/\\\\n2026-01-25T00:09:45Z [verbose] multus-daemon started\\\\n2026-01-25T00:09:45Z [verbose] Readiness Indicator file check\\\\n2026-01-25T00:10:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:10:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.769728 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.782043 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc 
kubenswrapper[4947]: I0125 00:10:32.793990 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.812968 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.813038 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.813061 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.813094 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.813117 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:32Z","lastTransitionTime":"2026-01-25T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.813175 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7
018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.825850 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd10641-e4b3-4d72-ba91-ed540316eb7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.841209 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.854331 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3
d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.868257 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.882233 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.895997 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.915469 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.915494 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.915502 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.915516 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.915526 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:32Z","lastTransitionTime":"2026-01-25T00:10:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.924491 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:08Z\\\",\\\"message\\\":\\\" 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579455 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579460 6597 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:08.579464 6597 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:08.579468 6597 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579476 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579506 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579511 6597 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0125 00:10:08.579524 6597 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166
ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:32 crc kubenswrapper[4947]: I0125 00:10:32.941673 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:32Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.017923 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.017967 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.017978 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:33 crc 
kubenswrapper[4947]: I0125 00:10:33.017993 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.018003 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:33Z","lastTransitionTime":"2026-01-25T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.075773 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 16:10:44.942338938 +0000 UTC Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.120647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.120690 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.120704 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.120720 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.120734 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:33Z","lastTransitionTime":"2026-01-25T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.223719 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.223773 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.223786 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.223804 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.223815 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:33Z","lastTransitionTime":"2026-01-25T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.326897 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.326970 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.326981 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.327001 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.327012 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:33Z","lastTransitionTime":"2026-01-25T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.429011 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.429057 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.429069 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.429089 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.429101 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:33Z","lastTransitionTime":"2026-01-25T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.530988 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.531015 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.531026 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.531040 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.531051 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:33Z","lastTransitionTime":"2026-01-25T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.634512 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.634572 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.634596 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.634627 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.634650 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:33Z","lastTransitionTime":"2026-01-25T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.737853 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.737926 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.737947 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.737977 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.737999 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:33Z","lastTransitionTime":"2026-01-25T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.840689 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.840736 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.840747 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.840768 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.840780 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:33Z","lastTransitionTime":"2026-01-25T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.943145 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.943183 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.943194 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.943211 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:33 crc kubenswrapper[4947]: I0125 00:10:33.943222 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:33Z","lastTransitionTime":"2026-01-25T00:10:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.046552 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.046592 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.046602 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.046618 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.046630 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:34Z","lastTransitionTime":"2026-01-25T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.076353 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:12:56.414294699 +0000 UTC Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.088846 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.088957 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.088856 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.088837 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:34 crc kubenswrapper[4947]: E0125 00:10:34.089067 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:34 crc kubenswrapper[4947]: E0125 00:10:34.089221 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:34 crc kubenswrapper[4947]: E0125 00:10:34.089378 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:34 crc kubenswrapper[4947]: E0125 00:10:34.089501 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.149793 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.149865 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.149882 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.149909 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.149928 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:34Z","lastTransitionTime":"2026-01-25T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.253895 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.253973 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.254002 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.254039 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.254063 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:34Z","lastTransitionTime":"2026-01-25T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.356877 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.357185 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.357384 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.357590 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.357746 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:34Z","lastTransitionTime":"2026-01-25T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.460213 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.460277 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.460295 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.460322 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.460339 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:34Z","lastTransitionTime":"2026-01-25T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.562954 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.563028 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.563047 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.563074 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.563092 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:34Z","lastTransitionTime":"2026-01-25T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.665662 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.665704 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.665716 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.665736 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.665747 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:34Z","lastTransitionTime":"2026-01-25T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.769262 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.769340 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.769378 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.769413 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.769437 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:34Z","lastTransitionTime":"2026-01-25T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.872554 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.872630 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.872662 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.872694 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.872716 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:34Z","lastTransitionTime":"2026-01-25T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.975592 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.975658 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.975677 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.975700 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:34 crc kubenswrapper[4947]: I0125 00:10:34.975719 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:34Z","lastTransitionTime":"2026-01-25T00:10:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.076912 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 04:48:47.543066077 +0000 UTC Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.082731 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.082791 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.082813 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.082839 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.082857 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.185655 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.185722 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.185745 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.185776 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.185802 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.288477 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.288549 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.288574 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.288605 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.288628 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.391890 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.391943 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.391956 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.391977 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.391996 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.494772 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.494840 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.494857 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.494883 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.494902 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.535891 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.535951 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.535968 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.535994 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.536018 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: E0125 00:10:35.555983 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:35Z is after 2025-08-24T17:21:41Z"
Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.560910 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.560964 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.560977 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.561000 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.561012 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: E0125 00:10:35.577788 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:35Z is after 2025-08-24T17:21:41Z"
Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.582333 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.582399 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.582417 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.582444 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.582462 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: E0125 00:10:35.602954 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:35Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.607259 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.607315 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.607334 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.607361 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.607379 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: E0125 00:10:35.622914 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:35Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.627708 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.627765 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.627777 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.627796 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.627811 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: E0125 00:10:35.641631 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a468ef55-66d7-4612-bf14-5eff54a3bf14\\\",\\\"systemUUID\\\":\\\"07b95270-97eb-4b89-897d-837b061280fd\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:35Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:35 crc kubenswrapper[4947]: E0125 00:10:35.641873 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.644142 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.644382 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.644400 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.644423 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.644441 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.751848 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.752064 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.752087 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.752253 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.752277 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.856224 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.856317 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.856369 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.856399 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.856417 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.959930 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.959990 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.960006 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.960037 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:35 crc kubenswrapper[4947]: I0125 00:10:35.960054 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:35Z","lastTransitionTime":"2026-01-25T00:10:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.062920 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.062989 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.063005 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.063026 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.063039 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:36Z","lastTransitionTime":"2026-01-25T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.077673 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 06:17:56.934438845 +0000 UTC Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.089011 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.089166 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:36 crc kubenswrapper[4947]: E0125 00:10:36.089193 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.089247 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:36 crc kubenswrapper[4947]: E0125 00:10:36.089364 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.089702 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:36 crc kubenswrapper[4947]: E0125 00:10:36.089781 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:36 crc kubenswrapper[4947]: E0125 00:10:36.090036 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.090229 4947 scope.go:117] "RemoveContainer" containerID="46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.166668 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.166745 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.166764 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.166793 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.166817 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:36Z","lastTransitionTime":"2026-01-25T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.269958 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.270016 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.270033 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.270053 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.270066 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:36Z","lastTransitionTime":"2026-01-25T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.373140 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.373181 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.373191 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.373206 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.373218 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:36Z","lastTransitionTime":"2026-01-25T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.475935 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.475985 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.475997 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.476015 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.476026 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:36Z","lastTransitionTime":"2026-01-25T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.578821 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.578873 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.578891 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.578914 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.578946 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:36Z","lastTransitionTime":"2026-01-25T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.681358 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.681446 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.681798 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.682062 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.682167 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:36Z","lastTransitionTime":"2026-01-25T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.682776 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/2.log" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.686712 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"} Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.698715 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea8
3e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.709230 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.722808 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.732666 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.748811 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:08Z\\\",\\\"message\\\":\\\" 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579455 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579460 6597 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:08.579464 6597 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:08.579468 6597 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579476 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579506 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579511 6597 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0125 00:10:08.579524 6597 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped 
a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:10:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.763661 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.776640 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc 
kubenswrapper[4947]: I0125 00:10:36.785174 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.785214 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.785225 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.785245 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.785256 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:36Z","lastTransitionTime":"2026-01-25T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.790671 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016
dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.802041 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.813141 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.823020 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.837917 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.850117 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c0ba6afdf615533c6d174c24f3d76e5070b4436d09dbe07c9a6c595203d5470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:30Z\\\",\\\"message\\\":\\\"2026-01-25T00:09:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9993edec-22d0-4587-a147-282f628c2ecf\\\\n2026-01-25T00:09:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9993edec-22d0-4587-a147-282f628c2ecf to /host/opt/cni/bin/\\\\n2026-01-25T00:09:45Z [verbose] multus-daemon started\\\\n2026-01-25T00:09:45Z [verbose] Readiness Indicator file check\\\\n2026-01-25T00:10:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:10:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.861525 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\
\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T0
0:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.872015 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd10641-e4b3-4d72-ba91-ed540316eb7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.883190 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.888176 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.888230 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.888248 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 
00:10:36.888273 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.888291 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:36Z","lastTransitionTime":"2026-01-25T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.900571 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f
4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:36Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.991271 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.991313 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.991329 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.991347 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:36 crc kubenswrapper[4947]: I0125 00:10:36.991360 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:36Z","lastTransitionTime":"2026-01-25T00:10:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.078457 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 15:09:33.86890964 +0000 UTC Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.093612 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.093683 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.093701 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.093728 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.093746 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:37Z","lastTransitionTime":"2026-01-25T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.196867 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.196910 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.196918 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.196943 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.196962 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:37Z","lastTransitionTime":"2026-01-25T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.299812 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.299886 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.299912 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.299945 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.299970 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:37Z","lastTransitionTime":"2026-01-25T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.403952 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.404037 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.404061 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.404097 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.404156 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:37Z","lastTransitionTime":"2026-01-25T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.507263 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.507361 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.507379 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.507406 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.507424 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:37Z","lastTransitionTime":"2026-01-25T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.611461 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.611522 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.611540 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.611568 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.611589 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:37Z","lastTransitionTime":"2026-01-25T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.692325 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/3.log" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.692949 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/2.log" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.696056 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e" exitCode=1 Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.696105 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"} Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.696182 4947 scope.go:117] "RemoveContainer" containerID="46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.697740 4947 scope.go:117] "RemoveContainer" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e" Jan 25 00:10:37 crc kubenswrapper[4947]: E0125 00:10:37.698159 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.715077 4947 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.715180 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.715203 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.715226 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.715246 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:37Z","lastTransitionTime":"2026-01-25T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.716689 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.732221 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.756940 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:08Z\\\",\\\"message\\\":\\\" 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579455 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579460 6597 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:08.579464 6597 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:08.579468 6597 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579476 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579506 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579511 6597 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0125 00:10:08.579524 6597 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:37Z\\\",\\\"message\\\":\\\"5 00:10:36.980299 6986 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:36.980305 6986 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:36.980311 6986 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:36.980181 6986 lb_config.go:1031] Cluster endpoints for openshift-ingress/router-internal-default for network=default are: map[]\\\\nI0125 00:10:36.980302 6986 
services_controller.go:443] Built service openshift-machine-api/control-plane-machine-set-operator LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.41\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0125 00:10:36.980332 6986 services_controller.go:443] Built service openshift-ingress/router-internal-default LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.176\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:80, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\"
:\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.772324 4947 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.787643 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.799764 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.811607 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.817752 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.817813 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.817831 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.817857 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.817877 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:37Z","lastTransitionTime":"2026-01-25T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.828497 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.842438 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c0ba6afdf615533c6d174c24f3d76e5070b4436d09dbe07c9a6c595203d5470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:30Z\\\",\\\"message\\\":\\\"2026-01-25T00:09:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9993edec-22d0-4587-a147-282f628c2ecf\\\\n2026-01-25T00:09:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9993edec-22d0-4587-a147-282f628c2ecf to /host/opt/cni/bin/\\\\n2026-01-25T00:09:45Z [verbose] multus-daemon started\\\\n2026-01-25T00:09:45Z [verbose] Readiness Indicator file check\\\\n2026-01-25T00:10:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:10:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-
lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.854082 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.865995 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc 
kubenswrapper[4947]: I0125 00:10:37.882104 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.895358 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.914790 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.919836 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.919880 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.919889 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.919906 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.919918 4947 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:37Z","lastTransitionTime":"2026-01-25T00:10:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.932335 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd10641-e4b3-4d72-ba91-ed540316eb7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.951144 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:37 crc kubenswrapper[4947]: I0125 00:10:37.966858 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3
d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:37Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.022768 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.022822 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.022836 4947 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.022855 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.022867 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:38Z","lastTransitionTime":"2026-01-25T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.078548 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 02:45:43.771394302 +0000 UTC Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.088923 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.089014 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.089020 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.089035 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:38 crc kubenswrapper[4947]: E0125 00:10:38.089283 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:38 crc kubenswrapper[4947]: E0125 00:10:38.089385 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:38 crc kubenswrapper[4947]: E0125 00:10:38.089596 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:38 crc kubenswrapper[4947]: E0125 00:10:38.089651 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.126164 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.126206 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.126224 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.126245 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.126261 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:38Z","lastTransitionTime":"2026-01-25T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.228938 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.228986 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.228999 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.229018 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.229032 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:38Z","lastTransitionTime":"2026-01-25T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.331932 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.331986 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.332011 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.332044 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.332067 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:38Z","lastTransitionTime":"2026-01-25T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.435108 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.435206 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.435224 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.435256 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.435276 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:38Z","lastTransitionTime":"2026-01-25T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.537625 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.537699 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.537723 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.537754 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.537778 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:38Z","lastTransitionTime":"2026-01-25T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.640612 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.640662 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.640680 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.640703 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.640723 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:38Z","lastTransitionTime":"2026-01-25T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.702253 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/3.log" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.747505 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.747576 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.747598 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.747624 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.747644 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:38Z","lastTransitionTime":"2026-01-25T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.850992 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.851061 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.851080 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.851108 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.851155 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:38Z","lastTransitionTime":"2026-01-25T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.953591 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.953631 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.953675 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.953696 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:38 crc kubenswrapper[4947]: I0125 00:10:38.953711 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:38Z","lastTransitionTime":"2026-01-25T00:10:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.056253 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.056291 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.056305 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.056356 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.056371 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:39Z","lastTransitionTime":"2026-01-25T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.078933 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 20:24:03.664078769 +0000 UTC Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.159253 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.159493 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.159510 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.159534 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.159550 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:39Z","lastTransitionTime":"2026-01-25T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.262202 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.262269 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.262288 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.262315 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.262345 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:39Z","lastTransitionTime":"2026-01-25T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.365983 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.366041 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.366053 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.366074 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.366432 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:39Z","lastTransitionTime":"2026-01-25T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.470019 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.470100 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.470165 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.470199 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.470220 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:39Z","lastTransitionTime":"2026-01-25T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.574068 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.574165 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.574186 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.574211 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.574230 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:39Z","lastTransitionTime":"2026-01-25T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.677013 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.677160 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.677182 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.677208 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.677226 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:39Z","lastTransitionTime":"2026-01-25T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.780467 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.780514 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.780530 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.780557 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.780575 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:39Z","lastTransitionTime":"2026-01-25T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.884859 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.884941 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.884963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.884994 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.885017 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:39Z","lastTransitionTime":"2026-01-25T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.989251 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.989293 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.989306 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.989323 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:39 crc kubenswrapper[4947]: I0125 00:10:39.989362 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:39Z","lastTransitionTime":"2026-01-25T00:10:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.079942 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 05:30:43.218016665 +0000 UTC Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.089291 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.089320 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 25 00:10:40 crc kubenswrapper[4947]: E0125 00:10:40.089466 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.090242 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 25 00:10:40 crc kubenswrapper[4947]: E0125 00:10:40.090494 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.090857 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 25 00:10:40 crc kubenswrapper[4947]: E0125 00:10:40.090911 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 25 00:10:40 crc kubenswrapper[4947]: E0125 00:10:40.090948 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.092830 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.092902 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.092925 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.092955 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.092977 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:40Z","lastTransitionTime":"2026-01-25T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.105161 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.196304 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.196391 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.196418 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.196455 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.196487 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:40Z","lastTransitionTime":"2026-01-25T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.300267 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.300333 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.300350 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.300375 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.300394 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:40Z","lastTransitionTime":"2026-01-25T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.403368 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.403444 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.403473 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.403501 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.403518 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:40Z","lastTransitionTime":"2026-01-25T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.506867 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.506912 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.506926 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.506944 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.506956 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:40Z","lastTransitionTime":"2026-01-25T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.610220 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.610301 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.610318 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.610344 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.610362 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:40Z","lastTransitionTime":"2026-01-25T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.717699 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.717751 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.717763 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.717816 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.717831 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:40Z","lastTransitionTime":"2026-01-25T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.820963 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.821027 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.821049 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.821074 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.821091 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:40Z","lastTransitionTime":"2026-01-25T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.926265 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.926346 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.926370 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.926399 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:40 crc kubenswrapper[4947]: I0125 00:10:40.926419 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:40Z","lastTransitionTime":"2026-01-25T00:10:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.030058 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.030118 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.030161 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.030186 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.030204 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:41Z","lastTransitionTime":"2026-01-25T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.080822 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:05:48.535474002 +0000 UTC Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.107501 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 
00:10:41.127219 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://66be6a6a3bcff497324f4f7555e1a76afa4251ac11fc7a655eca6028ef960bf6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9ac0df50a131ce2b424d082a232fb450acb74156fe12e6352fdeee6e5cb17b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.132315 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.132378 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.132396 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.132420 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.132438 4947 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:41Z","lastTransitionTime":"2026-01-25T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.140918 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2w6nd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0a5c5a9a-cc45-4715-8e37-35798d843870\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8354f38cf7b24b1409fc2b31d85d41a7aa7c0fd29dd96dcc7e9efa4705a62d21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gxfgl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2w6nd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.170683 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf5f940-5287-40f1-b208-535cdfcb0054\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46205261a9d494c122819024d60498d1ff3f5de8c0205210be2b3cd5ad48b885\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:08Z\\\",\\\"message\\\":\\\" 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579455 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579460 6597 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:08.579464 6597 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:08.579468 6597 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:08.579476 6597 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579506 6597 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-target-xd92c\\\\nI0125 00:10:08.579511 6597 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-target-xd92c in node crc\\\\nF0125 00:10:08.579524 6597 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped a\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:07Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:37Z\\\",\\\"message\\\":\\\"5 00:10:36.980299 6986 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-9fspn in node crc\\\\nI0125 00:10:36.980305 6986 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-9fspn after 0 failed attempt(s)\\\\nI0125 00:10:36.980311 6986 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-9fspn\\\\nI0125 00:10:36.980181 6986 lb_config.go:1031] Cluster endpoints for openshift-ingress/router-internal-default for network=default are: map[]\\\\nI0125 00:10:36.980302 6986 
services_controller.go:443] Built service openshift-machine-api/control-plane-machine-set-operator LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.41\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0125 00:10:36.980332 6986 services_controller.go:443] Built service openshift-ingress/router-internal-default LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.176\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:80, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:10:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\"
:\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xh6bp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fvfwz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.183119 4947 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.198829 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:41Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae7d222fe319a562ba81c268b72566a7e1440056f2e3b710d63798c4d60f7717\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.215520 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.227626 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1ba6cdfc959561c61c9d4864233e541ef778e22b7924e7d6e759fc34ef3f3eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.236951 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.237017 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.237037 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.237061 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.237077 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:41Z","lastTransitionTime":"2026-01-25T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.240793 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b89f0c74-3c8d-4e3f-8065-9e25a6749dcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aba8ed36fb80ec2f2a828b18644a5b86e9ffa863a2257034f6b320bff447678a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0319a6d643ce35906b65402b009a84f2ecff6acebd669e238ec42c266b94d514\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://12b4f69965c641bc78892fbe29416547fc428c071165e04e7b84fc1de22f9d32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a60fff6e7c615ed174ceeb82fcb820e25126804c5371064b184e30dea0c9377\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be3f0b5c706b456cdbab1651582f54f2cdef9342ea5f90f8ee7d18480e7a4581\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0cf27c64969d703711e74b0cd0e5bc016660f88f930962c7582bf1d06f8fadee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cc928f47981a4d1d9b7dc519a26c697ef124515d6a60d07602fabfffe81b273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-sssz9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-kb5q7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.258514 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9fspn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d914454-2c17-47f2-aa53-aba3bfaad296\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c0ba6afdf615533c6d174c24f3d76e5070b4436d09dbe07c9a6c595203d5470\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-25T00:10:30Z\\\",\\\"message\\\":\\\"2026-01-25T00:09:45+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_9993edec-22d0-4587-a147-282f628c2ecf\\\\n2026-01-25T00:09:45+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_9993edec-22d0-4587-a147-282f628c2ecf to /host/opt/cni/bin/\\\\n2026-01-25T00:09:45Z [verbose] multus-daemon started\\\\n2026-01-25T00:09:45Z [verbose] Readiness Indicator file check\\\\n2026-01-25T00:10:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:10:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-
lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h9x7t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9fspn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.272828 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hf8gg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f901695-ec8a-4fe2-ba5e-43e346b32ac3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://074b664a43ed6cee24d6ae7bb0eeb54d2935b87a0f4ae7e24df373ebbcb3400a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xzgv2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:45Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hf8gg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.286225 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:56Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qxbvt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:56Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hj7kb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc 
kubenswrapper[4947]: I0125 00:10:41.300815 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"789afdd2-edda-4937-a819-c3b85f8d0725\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5b762e2ae8885b16afd344668f0c248a49e0f6c1d6f61a18f57338816a22b157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f889a071f66dd761907a1bf48f8cc25e8673142d465394d02e27ff57ee687ca3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://740cd532d6c952da06f61d6d5cb7c2d870a8d03283016c4661fb68575c73c3fd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.320290 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c534de12-4879-4815-adf1-b14e38021e2b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-25T00:09:39Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0125 00:09:33.998756 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0125 00:09:34.000416 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1894185698/tls.crt::/tmp/serving-cert-1894185698/tls.key\\\\\\\"\\\\nI0125 00:09:39.834908 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0125 00:09:39.837336 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0125 00:09:39.837358 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0125 00:09:39.837381 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0125 00:09:39.837388 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0125 00:09:39.844058 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0125 00:09:39.844112 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0125 00:09:39.844155 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0125 00:09:39.844148 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0125 00:09:39.844171 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0125 00:09:39.844182 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0125 00:09:39.844190 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0125 00:09:39.844199 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0125 00:09:39.846329 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:23Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://050370630c354564a41da4e21b602c6c
97c45096c341b165971068042e1fa923\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.332726 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8dd10641-e4b3-4d72-ba91-ed540316eb7b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:10:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c24388261b500b452e9b238428ea72fc01a87302083b6cddbbcc646b025b088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf9ad4f367bb191845eecb4178669597ee6fc54195b6fa93f8fc17a4c7a43c7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c18c77a13ec1895e24cdc4a6d652e02f94c48856ab226683b6908e6524a6d66b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://fc11710e7ca70163b9acf0774e86925fe8f343b57bf487dd2602797aa8577f2d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.339964 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.340043 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.340058 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.340075 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.340086 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:41Z","lastTransitionTime":"2026-01-25T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.352768 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:40Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.368401 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba95f90e-9162-425c-9ac3-d655ea43cfa0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6a063931e33e98f7125d995ea29ad219e7143c0fd14a4ad57cba7608afd98b6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://33f21149a21977361a24c0bc54887c8ca18a3
d72cd6714bc26228b1d049c181a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ggxrj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-nxpzm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.382262 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3d4f57b1-279d-46a6-a753-2f9221644cfa\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20298eba3286e5999a381eba946a8d66115b05b2c0b73c61c7c005aa95bd1f27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e28d9e5ded99984c96a07848bed082c840a86b273e0809a7103e07e789b81147\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e28d9e5ded99984c96a07848bed082c840a86b273e0809a7103e07e789b81147\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-25T00:09:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-25T00:09:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:21Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:41Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.443818 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.443885 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.443912 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.443946 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.443970 4947 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:41Z","lastTransitionTime":"2026-01-25T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.545589 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.545622 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.545630 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.545642 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.545650 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:41Z","lastTransitionTime":"2026-01-25T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.648432 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.648461 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.648469 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.648482 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.648490 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:41Z","lastTransitionTime":"2026-01-25T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.751420 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.751464 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.751478 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.751496 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.751510 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:41Z","lastTransitionTime":"2026-01-25T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.854458 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.854514 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.854533 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.854556 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.854574 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:41Z","lastTransitionTime":"2026-01-25T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.956279 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.956414 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.956426 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.956441 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:41 crc kubenswrapper[4947]: I0125 00:10:41.956453 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:41Z","lastTransitionTime":"2026-01-25T00:10:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.059156 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.059200 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.059212 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.059231 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.059245 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:42Z","lastTransitionTime":"2026-01-25T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.081409 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 02:28:06.932825344 +0000 UTC Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.088847 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.089009 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:42 crc kubenswrapper[4947]: E0125 00:10:42.089213 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.089291 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.089326 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:42 crc kubenswrapper[4947]: E0125 00:10:42.089956 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:42 crc kubenswrapper[4947]: E0125 00:10:42.090081 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:42 crc kubenswrapper[4947]: E0125 00:10:42.090194 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.161702 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.161817 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.161836 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.161857 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.161876 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:42Z","lastTransitionTime":"2026-01-25T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.263840 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.263965 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.263985 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.264007 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.264023 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:42Z","lastTransitionTime":"2026-01-25T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.366825 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.366880 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.366897 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.366919 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.366935 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:42Z","lastTransitionTime":"2026-01-25T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.469532 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.469580 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.469599 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.469622 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.469639 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:42Z","lastTransitionTime":"2026-01-25T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.572794 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.572862 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.572886 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.572914 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.572935 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:42Z","lastTransitionTime":"2026-01-25T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.675634 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.675701 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.675721 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.675750 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.675771 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:42Z","lastTransitionTime":"2026-01-25T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.781544 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.782095 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.782114 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.782167 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.782186 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:42Z","lastTransitionTime":"2026-01-25T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.885448 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.885522 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.885540 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.885568 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.885592 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:42Z","lastTransitionTime":"2026-01-25T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.988859 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.988932 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.988946 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.988968 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:42 crc kubenswrapper[4947]: I0125 00:10:42.988985 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:42Z","lastTransitionTime":"2026-01-25T00:10:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.081737 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 12:47:08.668750045 +0000 UTC Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.092588 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.092635 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.092647 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.092665 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.092678 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:43Z","lastTransitionTime":"2026-01-25T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.195540 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.195607 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.195625 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.195651 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.195670 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:43Z","lastTransitionTime":"2026-01-25T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.298628 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.298693 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.298710 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.298733 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.298751 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:43Z","lastTransitionTime":"2026-01-25T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.402281 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.402344 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.402363 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.402386 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.402403 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:43Z","lastTransitionTime":"2026-01-25T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.505327 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.505393 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.505412 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.505438 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.505457 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:43Z","lastTransitionTime":"2026-01-25T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.608853 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.608914 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.608936 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.608967 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.608989 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:43Z","lastTransitionTime":"2026-01-25T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.711581 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.711625 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.711645 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.711672 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.711690 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:43Z","lastTransitionTime":"2026-01-25T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.814589 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.814645 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.814663 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.814688 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.814704 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:43Z","lastTransitionTime":"2026-01-25T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.888709 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.888888 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-25 00:11:47.888853956 +0000 UTC m=+147.121844436 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.888954 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.889069 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.889221 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.889285 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.889296 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.889368 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.889368 4947 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.889394 4947 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.889478 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.889451991 +0000 UTC m=+147.122442471 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.889516 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.889500642 +0000 UTC m=+147.122491122 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.889530 4947 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.889672 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.889643635 +0000 UTC m=+147.122634115 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.890109 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.890201 4947 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.890230 4947 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:10:43 crc kubenswrapper[4947]: E0125 00:10:43.890620 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.890327891 +0000 UTC m=+147.123318381 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.918764 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.918833 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.918852 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.918876 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:43 crc kubenswrapper[4947]: I0125 00:10:43.918893 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:43Z","lastTransitionTime":"2026-01-25T00:10:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.022522 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.022588 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.022611 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.022641 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.022663 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:44Z","lastTransitionTime":"2026-01-25T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.082311 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 16:02:42.275813165 +0000 UTC Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.088612 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.088685 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:44 crc kubenswrapper[4947]: E0125 00:10:44.088834 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.089080 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:44 crc kubenswrapper[4947]: E0125 00:10:44.089262 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.089479 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:44 crc kubenswrapper[4947]: E0125 00:10:44.089494 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:44 crc kubenswrapper[4947]: E0125 00:10:44.089785 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.126080 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.126180 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.126206 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.126229 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.126247 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:44Z","lastTransitionTime":"2026-01-25T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.229585 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.229649 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.229666 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.229699 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.229720 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:44Z","lastTransitionTime":"2026-01-25T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.332809 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.332874 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.332898 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.332927 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.332949 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:44Z","lastTransitionTime":"2026-01-25T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.436046 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.436110 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.436160 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.436185 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.436202 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:44Z","lastTransitionTime":"2026-01-25T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.539479 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.539562 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.539585 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.539615 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.539637 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:44Z","lastTransitionTime":"2026-01-25T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.642327 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.642376 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.642387 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.642403 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.642414 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:44Z","lastTransitionTime":"2026-01-25T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.745487 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.745563 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.745581 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.745608 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.745628 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:44Z","lastTransitionTime":"2026-01-25T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.848797 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.848860 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.848877 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.848901 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.848918 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:44Z","lastTransitionTime":"2026-01-25T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.951891 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.951952 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.951970 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.951991 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:44 crc kubenswrapper[4947]: I0125 00:10:44.952010 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:44Z","lastTransitionTime":"2026-01-25T00:10:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.054836 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.054898 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.054915 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.054939 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.054957 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:45Z","lastTransitionTime":"2026-01-25T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.082737 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 12:06:06.269193241 +0000 UTC Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.157326 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.157380 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.157396 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.157418 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.157436 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:45Z","lastTransitionTime":"2026-01-25T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.188820 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.189971 4947 scope.go:117] "RemoveContainer" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e" Jan 25 00:10:45 crc kubenswrapper[4947]: E0125 00:10:45.190252 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.210346 4947 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5f67ec28-baae-409e-a42d-03a486e7a26b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-25T00:09:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938ce96d13fc92240f4f960f3d92fea639ed7ff5864ca4b39b057161ca128013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2
031bcf574fb9d1c2b86176fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-25T00:09:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hqztm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-25T00:09:42Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mdgrh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-25T00:10:45Z is after 2025-08-24T17:21:41Z" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.260738 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.260783 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.260801 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:45 crc 
kubenswrapper[4947]: I0125 00:10:45.260823 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.260839 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:45Z","lastTransitionTime":"2026-01-25T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.318058 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-2w6nd" podStartSLOduration=64.318029079 podStartE2EDuration="1m4.318029079s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:10:45.280793701 +0000 UTC m=+84.513784171" watchObservedRunningTime="2026-01-25 00:10:45.318029079 +0000 UTC m=+84.551019559" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.362690 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.362725 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.362735 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.362751 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.362762 4947 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:45Z","lastTransitionTime":"2026-01-25T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.376892 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-kb5q7" podStartSLOduration=64.376865442 podStartE2EDuration="1m4.376865442s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:10:45.351648828 +0000 UTC m=+84.584639318" watchObservedRunningTime="2026-01-25 00:10:45.376865442 +0000 UTC m=+84.609855912" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.377088 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9fspn" podStartSLOduration=64.377080757 podStartE2EDuration="1m4.377080757s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:10:45.375523519 +0000 UTC m=+84.608514029" watchObservedRunningTime="2026-01-25 00:10:45.377080757 +0000 UTC m=+84.610071237" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.393593 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-hf8gg" podStartSLOduration=64.393569879 podStartE2EDuration="1m4.393569879s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:10:45.392976424 
+0000 UTC m=+84.625966884" watchObservedRunningTime="2026-01-25 00:10:45.393569879 +0000 UTC m=+84.626560359" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.433552 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=66.433524652 podStartE2EDuration="1m6.433524652s" podCreationTimestamp="2026-01-25 00:09:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:10:45.432739353 +0000 UTC m=+84.665729813" watchObservedRunningTime="2026-01-25 00:10:45.433524652 +0000 UTC m=+84.666515172" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.464485 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.464534 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.464542 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.464555 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.464565 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:45Z","lastTransitionTime":"2026-01-25T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.534984 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-nxpzm" podStartSLOduration=64.534961253 podStartE2EDuration="1m4.534961253s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:10:45.534282086 +0000 UTC m=+84.767272526" watchObservedRunningTime="2026-01-25 00:10:45.534961253 +0000 UTC m=+84.767951723" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.548080 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=5.548053681 podStartE2EDuration="5.548053681s" podCreationTimestamp="2026-01-25 00:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:10:45.547463598 +0000 UTC m=+84.780454068" watchObservedRunningTime="2026-01-25 00:10:45.548053681 +0000 UTC m=+84.781044181" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.566606 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.566808 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.566867 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.566927 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.567006 4947 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:45Z","lastTransitionTime":"2026-01-25T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.586631 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=65.586604201 podStartE2EDuration="1m5.586604201s" podCreationTimestamp="2026-01-25 00:09:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:10:45.569253108 +0000 UTC m=+84.802243628" watchObservedRunningTime="2026-01-25 00:10:45.586604201 +0000 UTC m=+84.819594681" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.653319 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.653378 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.653396 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.653421 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.653438 4947 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-25T00:10:45Z","lastTransitionTime":"2026-01-25T00:10:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.708227 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=38.708193483 podStartE2EDuration="38.708193483s" podCreationTimestamp="2026-01-25 00:10:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:10:45.585594656 +0000 UTC m=+84.818585096" watchObservedRunningTime="2026-01-25 00:10:45.708193483 +0000 UTC m=+84.941183963" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.708439 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb"] Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.709067 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.713425 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.713819 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.714995 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.715308 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.731409 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podStartSLOduration=64.731388048 podStartE2EDuration="1m4.731388048s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:10:45.731345446 +0000 UTC m=+84.964335916" watchObservedRunningTime="2026-01-25 00:10:45.731388048 +0000 UTC m=+84.964378508" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.811285 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ceb79108-eaf9-42eb-9d3a-125e321f4004-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.811402 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ceb79108-eaf9-42eb-9d3a-125e321f4004-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.811436 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ceb79108-eaf9-42eb-9d3a-125e321f4004-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.811519 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ceb79108-eaf9-42eb-9d3a-125e321f4004-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.811580 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ceb79108-eaf9-42eb-9d3a-125e321f4004-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.913677 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ceb79108-eaf9-42eb-9d3a-125e321f4004-kube-api-access\") pod 
\"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.913843 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ceb79108-eaf9-42eb-9d3a-125e321f4004-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.913898 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ceb79108-eaf9-42eb-9d3a-125e321f4004-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.913932 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ceb79108-eaf9-42eb-9d3a-125e321f4004-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.913972 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ceb79108-eaf9-42eb-9d3a-125e321f4004-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.914902 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ceb79108-eaf9-42eb-9d3a-125e321f4004-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.915039 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ceb79108-eaf9-42eb-9d3a-125e321f4004-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.915541 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ceb79108-eaf9-42eb-9d3a-125e321f4004-service-ca\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.926622 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ceb79108-eaf9-42eb-9d3a-125e321f4004-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:45 crc kubenswrapper[4947]: I0125 00:10:45.963545 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ceb79108-eaf9-42eb-9d3a-125e321f4004-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-nrhlb\" (UID: \"ceb79108-eaf9-42eb-9d3a-125e321f4004\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:46 crc kubenswrapper[4947]: I0125 00:10:46.023406 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" Jan 25 00:10:46 crc kubenswrapper[4947]: I0125 00:10:46.082894 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 01:51:00.760056424 +0000 UTC Jan 25 00:10:46 crc kubenswrapper[4947]: I0125 00:10:46.083001 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 25 00:10:46 crc kubenswrapper[4947]: I0125 00:10:46.089580 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:46 crc kubenswrapper[4947]: I0125 00:10:46.089667 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:46 crc kubenswrapper[4947]: E0125 00:10:46.089750 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:46 crc kubenswrapper[4947]: I0125 00:10:46.089693 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:46 crc kubenswrapper[4947]: I0125 00:10:46.089668 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:46 crc kubenswrapper[4947]: E0125 00:10:46.090005 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:46 crc kubenswrapper[4947]: E0125 00:10:46.090059 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:46 crc kubenswrapper[4947]: E0125 00:10:46.090191 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:46 crc kubenswrapper[4947]: I0125 00:10:46.096569 4947 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 25 00:10:46 crc kubenswrapper[4947]: I0125 00:10:46.744650 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" event={"ID":"ceb79108-eaf9-42eb-9d3a-125e321f4004","Type":"ContainerStarted","Data":"9d57fef4f0074537df38a77431aafeb8bb062633ba4720c1650ce7c4a8711b09"} Jan 25 00:10:47 crc kubenswrapper[4947]: I0125 00:10:47.750225 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" event={"ID":"ceb79108-eaf9-42eb-9d3a-125e321f4004","Type":"ContainerStarted","Data":"8f650e3519d18c65f98b862b0a2197200c291afbf46ff4bb8f6930702f67e577"} Jan 25 00:10:47 crc kubenswrapper[4947]: I0125 00:10:47.770584 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-nrhlb" podStartSLOduration=66.770560899 podStartE2EDuration="1m6.770560899s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:10:47.769730479 +0000 UTC m=+87.002720999" watchObservedRunningTime="2026-01-25 00:10:47.770560899 +0000 UTC m=+87.003551369" Jan 25 00:10:48 crc kubenswrapper[4947]: I0125 00:10:48.089177 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:48 crc kubenswrapper[4947]: I0125 00:10:48.089257 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:48 crc kubenswrapper[4947]: E0125 00:10:48.089341 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:10:48 crc kubenswrapper[4947]: I0125 00:10:48.089392 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:48 crc kubenswrapper[4947]: E0125 00:10:48.089471 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:10:48 crc kubenswrapper[4947]: E0125 00:10:48.089676 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:10:48 crc kubenswrapper[4947]: I0125 00:10:48.090079 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:48 crc kubenswrapper[4947]: E0125 00:10:48.090422 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:10:50 crc kubenswrapper[4947]: I0125 00:10:50.089772 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:10:50 crc kubenswrapper[4947]: I0125 00:10:50.089800 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:10:50 crc kubenswrapper[4947]: I0125 00:10:50.089824 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:10:50 crc kubenswrapper[4947]: I0125 00:10:50.090028 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:10:50 crc kubenswrapper[4947]: E0125 00:10:50.090244 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 25 00:10:50 crc kubenswrapper[4947]: E0125 00:10:50.090397 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 25 00:10:50 crc kubenswrapper[4947]: E0125 00:10:50.090650 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812"
Jan 25 00:10:50 crc kubenswrapper[4947]: E0125 00:10:50.090768 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 25 00:10:52 crc kubenswrapper[4947]: I0125 00:10:52.089324 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 25 00:10:52 crc kubenswrapper[4947]: I0125 00:10:52.089354 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 25 00:10:52 crc kubenswrapper[4947]: E0125 00:10:52.089624 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 25 00:10:52 crc kubenswrapper[4947]: I0125 00:10:52.089697 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 25 00:10:52 crc kubenswrapper[4947]: E0125 00:10:52.089965 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 25 00:10:52 crc kubenswrapper[4947]: E0125 00:10:52.090025 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 25 00:10:52 crc kubenswrapper[4947]: I0125 00:10:52.090324 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb"
Jan 25 00:10:52 crc kubenswrapper[4947]: E0125 00:10:52.090536 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812"
Jan 25 00:10:52 crc kubenswrapper[4947]: I0125 00:10:52.122093 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"]
Jan 25 00:10:54 crc kubenswrapper[4947]: I0125 00:10:54.089766 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb"
Jan 25 00:10:54 crc kubenswrapper[4947]: E0125 00:10:54.090010 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812"
Jan 25 00:10:54 crc kubenswrapper[4947]: I0125 00:10:54.090183 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 25 00:10:54 crc kubenswrapper[4947]: E0125 00:10:54.090318 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 25 00:10:54 crc kubenswrapper[4947]: I0125 00:10:54.090451 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 25 00:10:54 crc kubenswrapper[4947]: I0125 00:10:54.090451 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 25 00:10:54 crc kubenswrapper[4947]: E0125 00:10:54.090737 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 25 00:10:54 crc kubenswrapper[4947]: E0125 00:10:54.090810 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 25 00:10:56 crc kubenswrapper[4947]: I0125 00:10:56.089576 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 25 00:10:56 crc kubenswrapper[4947]: I0125 00:10:56.089630 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb"
Jan 25 00:10:56 crc kubenswrapper[4947]: I0125 00:10:56.089679 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 25 00:10:56 crc kubenswrapper[4947]: I0125 00:10:56.089607 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 25 00:10:56 crc kubenswrapper[4947]: E0125 00:10:56.089779 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 25 00:10:56 crc kubenswrapper[4947]: E0125 00:10:56.089941 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 25 00:10:56 crc kubenswrapper[4947]: E0125 00:10:56.090088 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812"
Jan 25 00:10:56 crc kubenswrapper[4947]: E0125 00:10:56.090213 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 25 00:10:58 crc kubenswrapper[4947]: I0125 00:10:58.089395 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb"
Jan 25 00:10:58 crc kubenswrapper[4947]: I0125 00:10:58.089530 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 25 00:10:58 crc kubenswrapper[4947]: I0125 00:10:58.090043 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 25 00:10:58 crc kubenswrapper[4947]: E0125 00:10:58.090262 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812"
Jan 25 00:10:58 crc kubenswrapper[4947]: I0125 00:10:58.090412 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 25 00:10:58 crc kubenswrapper[4947]: I0125 00:10:58.090791 4947 scope.go:117] "RemoveContainer" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"
Jan 25 00:10:58 crc kubenswrapper[4947]: E0125 00:10:58.090940 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 25 00:10:58 crc kubenswrapper[4947]: E0125 00:10:58.091081 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054"
Jan 25 00:10:58 crc kubenswrapper[4947]: E0125 00:10:58.091187 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 25 00:10:58 crc kubenswrapper[4947]: E0125 00:10:58.091240 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 25 00:11:00 crc kubenswrapper[4947]: I0125 00:11:00.088746 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 25 00:11:00 crc kubenswrapper[4947]: I0125 00:11:00.088806 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 25 00:11:00 crc kubenswrapper[4947]: I0125 00:11:00.088779 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 25 00:11:00 crc kubenswrapper[4947]: E0125 00:11:00.088947 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 25 00:11:00 crc kubenswrapper[4947]: I0125 00:11:00.088990 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb"
Jan 25 00:11:00 crc kubenswrapper[4947]: E0125 00:11:00.089094 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 25 00:11:00 crc kubenswrapper[4947]: E0125 00:11:00.089172 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812"
Jan 25 00:11:00 crc kubenswrapper[4947]: E0125 00:11:00.089250 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 25 00:11:00 crc kubenswrapper[4947]: I0125 00:11:00.796781 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb"
Jan 25 00:11:00 crc kubenswrapper[4947]: E0125 00:11:00.797062 4947 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 25 00:11:00 crc kubenswrapper[4947]: E0125 00:11:00.797215 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs podName:9a64fbf1-68fc-4379-9bb7-009c4f2cc812 nodeName:}" failed. No retries permitted until 2026-01-25 00:12:04.797183901 +0000 UTC m=+164.030174371 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs") pod "network-metrics-daemon-hj7kb" (UID: "9a64fbf1-68fc-4379-9bb7-009c4f2cc812") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 25 00:11:01 crc kubenswrapper[4947]: I0125 00:11:01.133215 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=9.133189015 podStartE2EDuration="9.133189015s" podCreationTimestamp="2026-01-25 00:10:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:01.130709745 +0000 UTC m=+100.363700225" watchObservedRunningTime="2026-01-25 00:11:01.133189015 +0000 UTC m=+100.366179495"
Jan 25 00:11:02 crc kubenswrapper[4947]: I0125 00:11:02.089251 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 25 00:11:02 crc kubenswrapper[4947]: I0125 00:11:02.089323 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 25 00:11:02 crc kubenswrapper[4947]: I0125 00:11:02.089391 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb"
Jan 25 00:11:02 crc kubenswrapper[4947]: E0125 00:11:02.089468 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 25 00:11:02 crc kubenswrapper[4947]: I0125 00:11:02.089530 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 25 00:11:02 crc kubenswrapper[4947]: E0125 00:11:02.089621 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812"
Jan 25 00:11:02 crc kubenswrapper[4947]: E0125 00:11:02.089747 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 25 00:11:02 crc kubenswrapper[4947]: E0125 00:11:02.089918 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 25 00:11:04 crc kubenswrapper[4947]: I0125 00:11:04.089689 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb"
Jan 25 00:11:04 crc kubenswrapper[4947]: E0125 00:11:04.089931 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812"
Jan 25 00:11:04 crc kubenswrapper[4947]: I0125 00:11:04.090047 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 25 00:11:04 crc kubenswrapper[4947]: E0125 00:11:04.090174 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 25 00:11:04 crc kubenswrapper[4947]: I0125 00:11:04.090261 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 25 00:11:04 crc kubenswrapper[4947]: I0125 00:11:04.090398 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 25 00:11:04 crc kubenswrapper[4947]: E0125 00:11:04.090394 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 25 00:11:04 crc kubenswrapper[4947]: E0125 00:11:04.090541 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 25 00:11:06 crc kubenswrapper[4947]: I0125 00:11:06.089307 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 25 00:11:06 crc kubenswrapper[4947]: I0125 00:11:06.089419 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 25 00:11:06 crc kubenswrapper[4947]: I0125 00:11:06.089460 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb"
Jan 25 00:11:06 crc kubenswrapper[4947]: I0125 00:11:06.089622 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 25 00:11:06 crc kubenswrapper[4947]: E0125 00:11:06.090191 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 25 00:11:06 crc kubenswrapper[4947]: E0125 00:11:06.090469 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 25 00:11:06 crc kubenswrapper[4947]: E0125 00:11:06.090619 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 25 00:11:06 crc kubenswrapper[4947]: E0125 00:11:06.090730 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812"
Jan 25 00:11:08 crc kubenswrapper[4947]: I0125 00:11:08.088887 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb"
Jan 25 00:11:08 crc kubenswrapper[4947]: I0125 00:11:08.088981 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 25 00:11:08 crc kubenswrapper[4947]: I0125 00:11:08.088912 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 25 00:11:08 crc kubenswrapper[4947]: E0125 00:11:08.089185 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 25 00:11:08 crc kubenswrapper[4947]: E0125 00:11:08.089329 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 25 00:11:08 crc kubenswrapper[4947]: E0125 00:11:08.089596 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812"
Jan 25 00:11:08 crc kubenswrapper[4947]: I0125 00:11:08.090945 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 25 00:11:08 crc kubenswrapper[4947]: E0125 00:11:08.091422 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 25 00:11:10 crc kubenswrapper[4947]: I0125 00:11:10.089648 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 25 00:11:10 crc kubenswrapper[4947]: I0125 00:11:10.089648 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 25 00:11:10 crc kubenswrapper[4947]: E0125 00:11:10.089859 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 25 00:11:10 crc kubenswrapper[4947]: I0125 00:11:10.089883 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb"
Jan 25 00:11:10 crc kubenswrapper[4947]: I0125 00:11:10.089798 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 25 00:11:10 crc kubenswrapper[4947]: E0125 00:11:10.090047 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 25 00:11:10 crc kubenswrapper[4947]: E0125 00:11:10.090238 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 25 00:11:10 crc kubenswrapper[4947]: E0125 00:11:10.090358 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812"
Jan 25 00:11:12 crc kubenswrapper[4947]: I0125 00:11:12.088661 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb"
Jan 25 00:11:12 crc kubenswrapper[4947]: I0125 00:11:12.088676 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 25 00:11:12 crc kubenswrapper[4947]: I0125 00:11:12.088782 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 25 00:11:12 crc kubenswrapper[4947]: I0125 00:11:12.089487 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 25 00:11:12 crc kubenswrapper[4947]: E0125 00:11:12.089664 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812"
Jan 25 00:11:12 crc kubenswrapper[4947]: E0125 00:11:12.089944 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 25 00:11:12 crc kubenswrapper[4947]: E0125 00:11:12.090108 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 25 00:11:12 crc kubenswrapper[4947]: E0125 00:11:12.090236 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 25 00:11:13 crc kubenswrapper[4947]: I0125 00:11:13.090647 4947 scope.go:117] "RemoveContainer" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"
Jan 25 00:11:13 crc kubenswrapper[4947]: E0125 00:11:13.090897 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fvfwz_openshift-ovn-kubernetes(8bf5f940-5287-40f1-b208-535cdfcb0054)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054"
Jan 25 00:11:14 crc kubenswrapper[4947]: I0125 00:11:14.088871 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 25 00:11:14 crc kubenswrapper[4947]: I0125 00:11:14.088902 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb"
Jan 25 00:11:14 crc kubenswrapper[4947]: I0125 00:11:14.088902 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 25 00:11:14 crc kubenswrapper[4947]: E0125 00:11:14.089702 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812"
Jan 25 00:11:14 crc kubenswrapper[4947]: E0125 00:11:14.089842 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 25 00:11:14 crc kubenswrapper[4947]: I0125 00:11:14.088957 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 25 00:11:14 crc kubenswrapper[4947]: E0125 00:11:14.089956 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 25 00:11:14 crc kubenswrapper[4947]: E0125 00:11:14.090267 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 25 00:11:16 crc kubenswrapper[4947]: I0125 00:11:16.088786 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 25 00:11:16 crc kubenswrapper[4947]: I0125 00:11:16.088801 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 25 00:11:16 crc kubenswrapper[4947]: E0125 00:11:16.089956 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 25 00:11:16 crc kubenswrapper[4947]: I0125 00:11:16.088838 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb"
Jan 25 00:11:16 crc kubenswrapper[4947]: E0125 00:11:16.090587 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812"
Jan 25 00:11:16 crc kubenswrapper[4947]: I0125 00:11:16.088811 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 25 00:11:16 crc kubenswrapper[4947]: E0125 00:11:16.090944 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 25 00:11:16 crc kubenswrapper[4947]: E0125 00:11:16.090257 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:17 crc kubenswrapper[4947]: I0125 00:11:17.868562 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9fspn_2d914454-2c17-47f2-aa53-aba3bfaad296/kube-multus/1.log" Jan 25 00:11:17 crc kubenswrapper[4947]: I0125 00:11:17.869244 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9fspn_2d914454-2c17-47f2-aa53-aba3bfaad296/kube-multus/0.log" Jan 25 00:11:17 crc kubenswrapper[4947]: I0125 00:11:17.869331 4947 generic.go:334] "Generic (PLEG): container finished" podID="2d914454-2c17-47f2-aa53-aba3bfaad296" containerID="6c0ba6afdf615533c6d174c24f3d76e5070b4436d09dbe07c9a6c595203d5470" exitCode=1 Jan 25 00:11:17 crc kubenswrapper[4947]: I0125 00:11:17.869383 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9fspn" event={"ID":"2d914454-2c17-47f2-aa53-aba3bfaad296","Type":"ContainerDied","Data":"6c0ba6afdf615533c6d174c24f3d76e5070b4436d09dbe07c9a6c595203d5470"} Jan 25 00:11:17 crc kubenswrapper[4947]: I0125 00:11:17.869451 4947 scope.go:117] "RemoveContainer" containerID="e77a203feacfa670f3ef57ba2aa4feb4742c672d3afffb2a2bf2f665f09f8656" Jan 25 00:11:17 crc kubenswrapper[4947]: I0125 00:11:17.870245 4947 scope.go:117] "RemoveContainer" containerID="6c0ba6afdf615533c6d174c24f3d76e5070b4436d09dbe07c9a6c595203d5470" Jan 25 00:11:17 crc kubenswrapper[4947]: E0125 00:11:17.870645 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-9fspn_openshift-multus(2d914454-2c17-47f2-aa53-aba3bfaad296)\"" pod="openshift-multus/multus-9fspn" podUID="2d914454-2c17-47f2-aa53-aba3bfaad296" Jan 25 00:11:18 crc kubenswrapper[4947]: I0125 00:11:18.088877 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:18 crc kubenswrapper[4947]: I0125 00:11:18.088954 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:18 crc kubenswrapper[4947]: I0125 00:11:18.088877 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:18 crc kubenswrapper[4947]: E0125 00:11:18.089073 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:18 crc kubenswrapper[4947]: I0125 00:11:18.089120 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:18 crc kubenswrapper[4947]: E0125 00:11:18.089389 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:18 crc kubenswrapper[4947]: E0125 00:11:18.089551 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:18 crc kubenswrapper[4947]: E0125 00:11:18.089682 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:18 crc kubenswrapper[4947]: I0125 00:11:18.876323 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9fspn_2d914454-2c17-47f2-aa53-aba3bfaad296/kube-multus/1.log" Jan 25 00:11:20 crc kubenswrapper[4947]: I0125 00:11:20.089455 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:20 crc kubenswrapper[4947]: I0125 00:11:20.089533 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:20 crc kubenswrapper[4947]: I0125 00:11:20.089533 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:20 crc kubenswrapper[4947]: I0125 00:11:20.089641 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:20 crc kubenswrapper[4947]: E0125 00:11:20.089786 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:20 crc kubenswrapper[4947]: E0125 00:11:20.089630 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:20 crc kubenswrapper[4947]: E0125 00:11:20.089935 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:20 crc kubenswrapper[4947]: E0125 00:11:20.090011 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:21 crc kubenswrapper[4947]: E0125 00:11:21.123980 4947 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 25 00:11:21 crc kubenswrapper[4947]: E0125 00:11:21.184931 4947 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Jan 25 00:11:22 crc kubenswrapper[4947]: I0125 00:11:22.090556 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:22 crc kubenswrapper[4947]: E0125 00:11:22.091360 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:22 crc kubenswrapper[4947]: I0125 00:11:22.090702 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:22 crc kubenswrapper[4947]: E0125 00:11:22.091505 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:22 crc kubenswrapper[4947]: I0125 00:11:22.090712 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:22 crc kubenswrapper[4947]: E0125 00:11:22.091714 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:22 crc kubenswrapper[4947]: I0125 00:11:22.090638 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:22 crc kubenswrapper[4947]: E0125 00:11:22.091806 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:24 crc kubenswrapper[4947]: I0125 00:11:24.089954 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:24 crc kubenswrapper[4947]: I0125 00:11:24.090011 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:24 crc kubenswrapper[4947]: I0125 00:11:24.090049 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:24 crc kubenswrapper[4947]: E0125 00:11:24.090291 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:24 crc kubenswrapper[4947]: I0125 00:11:24.090317 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:24 crc kubenswrapper[4947]: E0125 00:11:24.090563 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:24 crc kubenswrapper[4947]: E0125 00:11:24.091056 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:24 crc kubenswrapper[4947]: E0125 00:11:24.091203 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:24 crc kubenswrapper[4947]: I0125 00:11:24.091532 4947 scope.go:117] "RemoveContainer" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e" Jan 25 00:11:24 crc kubenswrapper[4947]: I0125 00:11:24.918085 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/3.log" Jan 25 00:11:24 crc kubenswrapper[4947]: I0125 00:11:24.921081 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerStarted","Data":"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"} Jan 25 00:11:24 crc kubenswrapper[4947]: I0125 00:11:24.921855 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:11:24 crc kubenswrapper[4947]: I0125 00:11:24.956450 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podStartSLOduration=103.956429305 podStartE2EDuration="1m43.956429305s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:24.955193055 +0000 UTC m=+124.188183545" watchObservedRunningTime="2026-01-25 00:11:24.956429305 +0000 UTC m=+124.189419775" Jan 25 00:11:25 crc kubenswrapper[4947]: I0125 00:11:25.132179 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hj7kb"] Jan 25 00:11:25 crc kubenswrapper[4947]: I0125 00:11:25.132311 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:25 crc kubenswrapper[4947]: E0125 00:11:25.132435 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:26 crc kubenswrapper[4947]: I0125 00:11:26.089279 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:26 crc kubenswrapper[4947]: I0125 00:11:26.089367 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:26 crc kubenswrapper[4947]: I0125 00:11:26.089430 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:26 crc kubenswrapper[4947]: E0125 00:11:26.089758 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:26 crc kubenswrapper[4947]: E0125 00:11:26.089954 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:26 crc kubenswrapper[4947]: E0125 00:11:26.090229 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:26 crc kubenswrapper[4947]: E0125 00:11:26.186604 4947 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 25 00:11:27 crc kubenswrapper[4947]: I0125 00:11:27.089343 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:27 crc kubenswrapper[4947]: E0125 00:11:27.089558 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:28 crc kubenswrapper[4947]: I0125 00:11:28.089425 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:28 crc kubenswrapper[4947]: E0125 00:11:28.089610 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:28 crc kubenswrapper[4947]: I0125 00:11:28.089874 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:28 crc kubenswrapper[4947]: E0125 00:11:28.089964 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:28 crc kubenswrapper[4947]: I0125 00:11:28.090164 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:28 crc kubenswrapper[4947]: E0125 00:11:28.090326 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:29 crc kubenswrapper[4947]: I0125 00:11:29.089560 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:29 crc kubenswrapper[4947]: E0125 00:11:29.089880 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:30 crc kubenswrapper[4947]: I0125 00:11:30.089047 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:30 crc kubenswrapper[4947]: I0125 00:11:30.089106 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:30 crc kubenswrapper[4947]: I0125 00:11:30.089210 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:30 crc kubenswrapper[4947]: E0125 00:11:30.089274 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:30 crc kubenswrapper[4947]: E0125 00:11:30.089415 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:30 crc kubenswrapper[4947]: E0125 00:11:30.089529 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:31 crc kubenswrapper[4947]: I0125 00:11:31.089484 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:31 crc kubenswrapper[4947]: E0125 00:11:31.091286 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:31 crc kubenswrapper[4947]: E0125 00:11:31.187379 4947 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 25 00:11:32 crc kubenswrapper[4947]: I0125 00:11:32.089352 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:32 crc kubenswrapper[4947]: I0125 00:11:32.089525 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:32 crc kubenswrapper[4947]: E0125 00:11:32.089638 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:32 crc kubenswrapper[4947]: I0125 00:11:32.089663 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:32 crc kubenswrapper[4947]: E0125 00:11:32.089851 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:32 crc kubenswrapper[4947]: E0125 00:11:32.090068 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:32 crc kubenswrapper[4947]: I0125 00:11:32.090454 4947 scope.go:117] "RemoveContainer" containerID="6c0ba6afdf615533c6d174c24f3d76e5070b4436d09dbe07c9a6c595203d5470" Jan 25 00:11:32 crc kubenswrapper[4947]: I0125 00:11:32.959631 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9fspn_2d914454-2c17-47f2-aa53-aba3bfaad296/kube-multus/1.log" Jan 25 00:11:32 crc kubenswrapper[4947]: I0125 00:11:32.960232 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9fspn" event={"ID":"2d914454-2c17-47f2-aa53-aba3bfaad296","Type":"ContainerStarted","Data":"c7e4bdd1edd4881050f59041c1d6812e03c6fe683d1104b228fa6e3da5f11032"} Jan 25 00:11:33 crc kubenswrapper[4947]: I0125 00:11:33.088818 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:33 crc kubenswrapper[4947]: E0125 00:11:33.088985 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:34 crc kubenswrapper[4947]: I0125 00:11:34.089724 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:34 crc kubenswrapper[4947]: I0125 00:11:34.089844 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:34 crc kubenswrapper[4947]: E0125 00:11:34.089941 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:34 crc kubenswrapper[4947]: E0125 00:11:34.090046 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:34 crc kubenswrapper[4947]: I0125 00:11:34.090213 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:34 crc kubenswrapper[4947]: E0125 00:11:34.090320 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:35 crc kubenswrapper[4947]: I0125 00:11:35.089012 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:35 crc kubenswrapper[4947]: E0125 00:11:35.089209 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hj7kb" podUID="9a64fbf1-68fc-4379-9bb7-009c4f2cc812" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.088976 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.088976 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.089090 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:36 crc kubenswrapper[4947]: E0125 00:11:36.089264 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 25 00:11:36 crc kubenswrapper[4947]: E0125 00:11:36.089415 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 25 00:11:36 crc kubenswrapper[4947]: E0125 00:11:36.089555 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.677110 4947 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.739996 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nt9qq"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.740630 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.745294 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.745648 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.745698 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.746022 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.746077 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.746107 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.751624 4947 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-dmsjj"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.752183 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.754658 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.757685 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.758665 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.758679 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.758780 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.758782 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.758874 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.758880 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.760840 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 
00:11:36.760885 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.766045 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.766291 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.781057 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.788685 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.790738 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.791267 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.791359 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5nql4"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.791796 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.793849 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.794053 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7kcc9"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.794640 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.794671 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-pruner-29488320-jf979"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.795052 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29488320-jf979" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.797781 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.798300 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.802422 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.803092 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.803688 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.803950 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.804643 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.805096 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.805573 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2plqs"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.806034 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.807358 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.807547 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.807714 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.807851 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.807924 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"serviceca" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808022 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808100 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808206 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808283 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808334 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808435 4947 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"serving-cert" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808469 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808595 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808623 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808673 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808747 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808767 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808842 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.808963 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.809092 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.809355 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.809530 4947 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"pruner-dockercfg-p7bcw" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.809775 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.812312 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xwjmr"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.812952 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.813476 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-5zvdg"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.814009 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5zvdg" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.814688 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.815472 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.816931 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.823090 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.823407 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.823624 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.823859 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.823980 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.824114 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.824325 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.824515 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.824735 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 25 00:11:36 crc 
kubenswrapper[4947]: I0125 00:11:36.824877 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.825357 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.825499 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.825571 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.825424 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.825964 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.826561 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.827205 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.827289 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-5nscb"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.827309 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.827813 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.828108 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848663 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbbs4\" (UniqueName: \"kubernetes.io/projected/d3a733c1-a1cf-42ef-a056-27185292354f-kube-api-access-fbbs4\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848708 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf8b174c-d1eb-4a2d-88c2-113302fa2300-service-ca-bundle\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848733 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-audit-policies\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848767 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" 
(UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848789 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf8b174c-d1eb-4a2d-88c2-113302fa2300-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848821 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848839 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf8b174c-d1eb-4a2d-88c2-113302fa2300-config\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848860 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: 
\"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848878 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848897 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848927 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.848964 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc 
kubenswrapper[4947]: I0125 00:11:36.848988 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm9wz\" (UniqueName: \"kubernetes.io/projected/bf8b174c-d1eb-4a2d-88c2-113302fa2300-kube-api-access-tm9wz\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.849009 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d3a733c1-a1cf-42ef-a056-27185292354f-audit-dir\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.849030 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.849049 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf8b174c-d1eb-4a2d-88c2-113302fa2300-serving-cert\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.849067 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.849089 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.849108 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.828771 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-h6jgn"]
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.850316 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.850894 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.872932 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zjf9d"]
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.873534 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.873869 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.874077 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.874377 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88"]
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.875017 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.876485 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.876584 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9"]
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.877524 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-95tmb"]
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.877881 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-95tmb"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.878110 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.891182 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v"]
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.892117 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.892259 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s"]
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.893090 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.895709 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vx9fn"]
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.896718 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.898454 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.903620 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk"]
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.904434 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw"]
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.904970 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw"]
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.905450 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.905484 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.905931 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4"]
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.906564 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.906790 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn"]
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.906823 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.907549 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.911014 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5"]
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.911608 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x"]
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.911956 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.912238 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.922715 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mprs4"]
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.925268 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8"]
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.925573 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc"]
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.926104 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.926707 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.926853 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.938202 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp"]
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.938796 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nt9qq"]
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.938885 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.943593 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.943763 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lz644"]
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.944531 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv"]
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.944869 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.946026 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.946168 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lz644"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951619 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-service-ca\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951664 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c613148-89dd-4904-b721-c90f6a0f89ba-trusted-ca\") pod \"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951695 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-config\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951725 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951754 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c456f9-6cbf-4e3c-992a-8636357253ad-console-serving-cert\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951776 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cba929a-19da-479b-b9fb-b4cffaaba4c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-5dh8t\" (UID: \"8cba929a-19da-479b-b9fb-b4cffaaba4c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951794 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaa67d1d-92d7-41aa-b72f-aee9bca370fc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cmngb\" (UID: \"eaa67d1d-92d7-41aa-b72f-aee9bca370fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951809 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4e8662e0-1de8-4371-8836-214a0394675c-node-pullsecrets\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951829 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf8b174c-d1eb-4a2d-88c2-113302fa2300-serving-cert\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951854 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60ffdf66-0472-4e1f-9ea6-869acc338d0e-config\") pod \"kube-controller-manager-operator-78b949d7b-spsw9\" (UID: \"60ffdf66-0472-4e1f-9ea6-869acc338d0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951875 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa67d1d-92d7-41aa-b72f-aee9bca370fc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cmngb\" (UID: \"eaa67d1d-92d7-41aa-b72f-aee9bca370fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951890 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e8662e0-1de8-4371-8836-214a0394675c-serving-cert\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951905 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf4jd\" (UniqueName: \"kubernetes.io/projected/2244349f-df5c-4813-a0e7-418a602f57b0-kube-api-access-mf4jd\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951924 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6msz\" (UniqueName: \"kubernetes.io/projected/9e4f53a6-fcc3-4310-965d-9a5dda91080b-kube-api-access-k6msz\") pod \"image-pruner-29488320-jf979\" (UID: \"9e4f53a6-fcc3-4310-965d-9a5dda91080b\") " pod="openshift-image-registry/image-pruner-29488320-jf979"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951940 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4e8662e0-1de8-4371-8836-214a0394675c-etcd-client\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951960 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951975 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-oauth-serving-cert\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.951992 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952008 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2244349f-df5c-4813-a0e7-418a602f57b0-default-certificate\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952027 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952044 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/222d5540-6b86-404a-b787-ea6a6043206a-etcd-service-ca\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952059 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2244349f-df5c-4813-a0e7-418a602f57b0-stats-auth\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952077 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-client-ca\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952094 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbbs4\" (UniqueName: \"kubernetes.io/projected/d3a733c1-a1cf-42ef-a056-27185292354f-kube-api-access-fbbs4\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952109 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf8b174c-d1eb-4a2d-88c2-113302fa2300-service-ca-bundle\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952144 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c613148-89dd-4904-b721-c90f6a0f89ba-serving-cert\") pod \"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952160 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b50bda2b-e707-456e-af02-796b6d9a4cdf-metrics-tls\") pod \"dns-operator-744455d44c-zjf9d\" (UID: \"b50bda2b-e707-456e-af02-796b6d9a4cdf\") " pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952176 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b7a00e-4def-4d1e-8333-94d15174223b-serving-cert\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952191 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqkd6\" (UniqueName: \"kubernetes.io/projected/4c613148-89dd-4904-b721-c90f6a0f89ba-kube-api-access-qqkd6\") pod \"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952210 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-audit-policies\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952226 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhc9z\" (UniqueName: \"kubernetes.io/projected/b8f2f610-05dc-49ea-882e-634d283b3caa-kube-api-access-dhc9z\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952244 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsx9g\" (UniqueName: \"kubernetes.io/projected/9d15a018-8297-4e45-8a62-afa89a267381-kube-api-access-zsx9g\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952258 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/222d5540-6b86-404a-b787-ea6a6043206a-serving-cert\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952273 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4qnc\" (UniqueName: \"kubernetes.io/projected/222d5540-6b86-404a-b787-ea6a6043206a-kube-api-access-s4qnc\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952296 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-console-config\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952311 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mtfs\" (UniqueName: \"kubernetes.io/projected/eaa67d1d-92d7-41aa-b72f-aee9bca370fc-kube-api-access-9mtfs\") pod \"openshift-controller-manager-operator-756b6f6bc6-cmngb\" (UID: \"eaa67d1d-92d7-41aa-b72f-aee9bca370fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952328 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952345 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952380 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952396 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf8b174c-d1eb-4a2d-88c2-113302fa2300-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952412 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-config\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952426 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-config\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952442 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-etcd-serving-ca\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952464 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/90a4381e-451b-4940-932a-efba1d101c81-machine-approver-tls\") pod \"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952482 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf8b174c-d1eb-4a2d-88c2-113302fa2300-config\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952496 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952510 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952526 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952541 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-trusted-ca-bundle\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952556 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8f2f610-05dc-49ea-882e-634d283b3caa-config\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952569 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9e4f53a6-fcc3-4310-965d-9a5dda91080b-serviceca\") pod \"image-pruner-29488320-jf979\" (UID: \"9e4f53a6-fcc3-4310-965d-9a5dda91080b\") " pod="openshift-image-registry/image-pruner-29488320-jf979"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952585 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952601 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6xwl\" (UniqueName: \"kubernetes.io/projected/0e97ae5e-35ab-41e9-aa03-ad060bbbd676-kube-api-access-z6xwl\") pod \"downloads-7954f5f757-5zvdg\" (UID: \"0e97ae5e-35ab-41e9-aa03-ad060bbbd676\") " pod="openshift-console/downloads-7954f5f757-5zvdg"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952615 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/222d5540-6b86-404a-b787-ea6a6043206a-etcd-ca\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952632 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmpf6\" (UniqueName: \"kubernetes.io/projected/8cba929a-19da-479b-b9fb-b4cffaaba4c2-kube-api-access-vmpf6\") pod \"openshift-config-operator-7777fb866f-5dh8t\" (UID: \"8cba929a-19da-479b-b9fb-b4cffaaba4c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952647 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4e8662e0-1de8-4371-8836-214a0394675c-encryption-config\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952661 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xtn8\" (UniqueName: \"kubernetes.io/projected/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-kube-api-access-2xtn8\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952675 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2244349f-df5c-4813-a0e7-418a602f57b0-metrics-certs\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952691 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952705 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60ffdf66-0472-4e1f-9ea6-869acc338d0e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-spsw9\" (UID: \"60ffdf66-0472-4e1f-9ea6-869acc338d0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952720 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-image-import-ca\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952735 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvf82\" (UniqueName: \"kubernetes.io/projected/4e8662e0-1de8-4371-8836-214a0394675c-kube-api-access-lvf82\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952749 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmh4c\" (UniqueName: \"kubernetes.io/projected/90a4381e-451b-4940-932a-efba1d101c81-kube-api-access-bmh4c\") pod \"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88"
Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952763 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c613148-89dd-4904-b721-c90f6a0f89ba-config\") pod \"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952778 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16240ac3-819b-4e68-bca9-c97c94599fbb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tq4ph\" (UID: \"16240ac3-819b-4e68-bca9-c97c94599fbb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952793 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/79a96518-940a-4490-9067-9e2f873753f7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-25dw7\" (UID: \"79a96518-940a-4490-9067-9e2f873753f7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952807 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4p7b\" (UniqueName: \"kubernetes.io/projected/79a96518-940a-4490-9067-9e2f873753f7-kube-api-access-l4p7b\") pod \"cluster-samples-operator-665b6dd947-25dw7\" (UID: \"79a96518-940a-4490-9067-9e2f873753f7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952822 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: 
\"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952844 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2244349f-df5c-4813-a0e7-418a602f57b0-service-ca-bundle\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952861 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952875 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8cba929a-19da-479b-b9fb-b4cffaaba4c2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5dh8t\" (UID: \"8cba929a-19da-479b-b9fb-b4cffaaba4c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952891 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9twfs\" (UniqueName: \"kubernetes.io/projected/b50bda2b-e707-456e-af02-796b6d9a4cdf-kube-api-access-9twfs\") pod \"dns-operator-744455d44c-zjf9d\" (UID: \"b50bda2b-e707-456e-af02-796b6d9a4cdf\") " pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952908 4947 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njhj5\" (UniqueName: \"kubernetes.io/projected/16240ac3-819b-4e68-bca9-c97c94599fbb-kube-api-access-njhj5\") pod \"openshift-apiserver-operator-796bbdcf4f-tq4ph\" (UID: \"16240ac3-819b-4e68-bca9-c97c94599fbb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952923 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b8f2f610-05dc-49ea-882e-634d283b3caa-images\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952940 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/49c456f9-6cbf-4e3c-992a-8636357253ad-console-oauth-config\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952955 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d15a018-8297-4e45-8a62-afa89a267381-serving-cert\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952970 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5nql4\" 
(UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.952984 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/222d5540-6b86-404a-b787-ea6a6043206a-etcd-client\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953013 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-client-ca\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953027 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8f2f610-05dc-49ea-882e-634d283b3caa-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953042 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60ffdf66-0472-4e1f-9ea6-869acc338d0e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-spsw9\" (UID: \"60ffdf66-0472-4e1f-9ea6-869acc338d0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953058 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x956c\" (UniqueName: \"kubernetes.io/projected/37b7a00e-4def-4d1e-8333-94d15174223b-kube-api-access-x956c\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953072 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/222d5540-6b86-404a-b787-ea6a6043206a-config\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953088 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953104 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tm9wz\" (UniqueName: \"kubernetes.io/projected/bf8b174c-d1eb-4a2d-88c2-113302fa2300-kube-api-access-tm9wz\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953118 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16240ac3-819b-4e68-bca9-c97c94599fbb-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-tq4ph\" (UID: \"16240ac3-819b-4e68-bca9-c97c94599fbb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953147 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90a4381e-451b-4940-932a-efba1d101c81-auth-proxy-config\") pod \"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953164 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4jhb\" (UniqueName: \"kubernetes.io/projected/49c456f9-6cbf-4e3c-992a-8636357253ad-kube-api-access-l4jhb\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953179 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90a4381e-451b-4940-932a-efba1d101c81-config\") pod \"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953194 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d3a733c1-a1cf-42ef-a056-27185292354f-audit-dir\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953209 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-audit\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.953224 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4e8662e0-1de8-4371-8836-214a0394675c-audit-dir\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.956785 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.957858 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.963749 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.967337 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: 
I0125 00:11:36.968166 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.980866 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.981705 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.984373 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.984409 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf8b174c-d1eb-4a2d-88c2-113302fa2300-serving-cert\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.995543 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d3a733c1-a1cf-42ef-a056-27185292354f-audit-dir\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.998243 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dmsjj"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.998283 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-k7fhc"] Jan 25 00:11:36 crc kubenswrapper[4947]: I0125 00:11:36.998949 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.000572 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf8b174c-d1eb-4a2d-88c2-113302fa2300-config\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.001297 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.001577 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.001703 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.002394 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf8b174c-d1eb-4a2d-88c2-113302fa2300-service-ca-bundle\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.005968 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.004419 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-audit-policies\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.008735 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5nql4"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.008760 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-qgqfk"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.009700 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.009932 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.010247 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.010506 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.013274 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.013549 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.013885 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.014197 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.014455 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.014630 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.014718 4947 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.014847 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.015399 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.015773 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.017244 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qgqfk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.017446 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.025716 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.026992 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf8b174c-d1eb-4a2d-88c2-113302fa2300-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.027227 4947 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.027486 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.027600 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.027711 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.027842 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.027900 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.028145 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.028402 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.028524 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.029083 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.029789 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.031599 
4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.031751 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.031854 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.033296 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.033469 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.033415 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.033693 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.033739 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.033896 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.033931 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.034003 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 
00:11:37.034055 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.034145 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.034175 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.036492 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.038850 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.038971 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.040586 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29488320-jf979"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.041921 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.042930 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7kcc9"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.045838 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2plqs"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.045867 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7"] Jan 25 00:11:37 crc 
kubenswrapper[4947]: I0125 00:11:37.046462 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.047670 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.048213 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.049505 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xwjmr"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.052735 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lz644"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.052826 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055175 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055749 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d15a018-8297-4e45-8a62-afa89a267381-serving-cert\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055788 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e8ad493-9466-46d8-8307-13f24463f184-serving-cert\") pod 
\"service-ca-operator-777779d784-n4bqp\" (UID: \"0e8ad493-9466-46d8-8307-13f24463f184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055816 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9p5g\" (UniqueName: \"kubernetes.io/projected/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-kube-api-access-j9p5g\") pod \"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055845 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8f2f610-05dc-49ea-882e-634d283b3caa-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055870 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16240ac3-819b-4e68-bca9-c97c94599fbb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tq4ph\" (UID: \"16240ac3-819b-4e68-bca9-c97c94599fbb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055889 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90a4381e-451b-4940-932a-efba1d101c81-auth-proxy-config\") pod \"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055907 
4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90a4381e-451b-4940-932a-efba1d101c81-config\") pod \"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055924 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-cabundle\") pod \"service-ca-9c57cc56f-lz644\" (UID: \"c04cc1eb-ec23-4876-afd1-f123c04cdc8a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lz644" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055946 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7brwl\" (UniqueName: \"kubernetes.io/projected/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-kube-api-access-7brwl\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055967 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e8f132b-916b-4973-9873-5919cb12251c-config\") pod \"kube-apiserver-operator-766d6c64bb-glppw\" (UID: \"1e8f132b-916b-4973-9873-5919cb12251c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.055988 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e8f132b-916b-4973-9873-5919cb12251c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-glppw\" (UID: 
\"1e8f132b-916b-4973-9873-5919cb12251c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056009 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c613148-89dd-4904-b721-c90f6a0f89ba-trusted-ca\") pod \"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056027 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-config\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056046 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cba929a-19da-479b-b9fb-b4cffaaba4c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-5dh8t\" (UID: \"8cba929a-19da-479b-b9fb-b4cffaaba4c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056065 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0fffe8f2-59b1-4215-809e-461bc8f5e386-audit-dir\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056084 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaa67d1d-92d7-41aa-b72f-aee9bca370fc-serving-cert\") 
pod \"openshift-controller-manager-operator-756b6f6bc6-cmngb\" (UID: \"eaa67d1d-92d7-41aa-b72f-aee9bca370fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056101 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4e8662e0-1de8-4371-8836-214a0394675c-node-pullsecrets\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056135 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2prvv\" (UID: \"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056153 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa67d1d-92d7-41aa-b72f-aee9bca370fc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cmngb\" (UID: \"eaa67d1d-92d7-41aa-b72f-aee9bca370fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056170 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx5zf\" (UniqueName: \"kubernetes.io/projected/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-kube-api-access-jx5zf\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:37 crc 
kubenswrapper[4947]: I0125 00:11:37.056188 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6msz\" (UniqueName: \"kubernetes.io/projected/9e4f53a6-fcc3-4310-965d-9a5dda91080b-kube-api-access-k6msz\") pod \"image-pruner-29488320-jf979\" (UID: \"9e4f53a6-fcc3-4310-965d-9a5dda91080b\") " pod="openshift-image-registry/image-pruner-29488320-jf979" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056208 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e8662e0-1de8-4371-8836-214a0394675c-serving-cert\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056230 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2prvv\" (UID: \"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056248 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4e8662e0-1de8-4371-8836-214a0394675c-etcd-client\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056267 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e230409-6e68-4f7c-b0c3-3e55433b22c1-package-server-manager-serving-cert\") pod 
\"package-server-manager-789f6589d5-tbnnc\" (UID: \"7e230409-6e68-4f7c-b0c3-3e55433b22c1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056296 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/222d5540-6b86-404a-b787-ea6a6043206a-etcd-service-ca\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056317 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2244349f-df5c-4813-a0e7-418a602f57b0-stats-auth\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056341 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b7a00e-4def-4d1e-8333-94d15174223b-serving-cert\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056382 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsx9g\" (UniqueName: \"kubernetes.io/projected/9d15a018-8297-4e45-8a62-afa89a267381-kube-api-access-zsx9g\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056444 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/222d5540-6b86-404a-b787-ea6a6043206a-serving-cert\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056461 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4qnc\" (UniqueName: \"kubernetes.io/projected/222d5540-6b86-404a-b787-ea6a6043206a-kube-api-access-s4qnc\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056479 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mtfs\" (UniqueName: \"kubernetes.io/projected/eaa67d1d-92d7-41aa-b72f-aee9bca370fc-kube-api-access-9mtfs\") pod \"openshift-controller-manager-operator-756b6f6bc6-cmngb\" (UID: \"eaa67d1d-92d7-41aa-b72f-aee9bca370fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056502 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0fffe8f2-59b1-4215-809e-461bc8f5e386-audit-policies\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056522 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0fffe8f2-59b1-4215-809e-461bc8f5e386-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" 
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056542 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgmn4\" (UniqueName: \"kubernetes.io/projected/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-kube-api-access-dgmn4\") pod \"service-ca-9c57cc56f-lz644\" (UID: \"c04cc1eb-ec23-4876-afd1-f123c04cdc8a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lz644" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056560 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-config\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056578 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-config\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056598 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzfhp\" (UniqueName: \"kubernetes.io/projected/0e8ad493-9466-46d8-8307-13f24463f184-kube-api-access-pzfhp\") pod \"service-ca-operator-777779d784-n4bqp\" (UID: \"0e8ad493-9466-46d8-8307-13f24463f184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056615 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-images\") pod 
\"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056633 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-proxy-tls\") pod \"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056666 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-tmpfs\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056692 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-apiservice-cert\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056721 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056750 4947 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-z6xwl\" (UniqueName: \"kubernetes.io/projected/0e97ae5e-35ab-41e9-aa03-ad060bbbd676-kube-api-access-z6xwl\") pod \"downloads-7954f5f757-5zvdg\" (UID: \"0e97ae5e-35ab-41e9-aa03-ad060bbbd676\") " pod="openshift-console/downloads-7954f5f757-5zvdg" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056777 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-trusted-ca-bundle\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056800 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4e8662e0-1de8-4371-8836-214a0394675c-encryption-config\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056827 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvf82\" (UniqueName: \"kubernetes.io/projected/4e8662e0-1de8-4371-8836-214a0394675c-kube-api-access-lvf82\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056851 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16240ac3-819b-4e68-bca9-c97c94599fbb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tq4ph\" (UID: \"16240ac3-819b-4e68-bca9-c97c94599fbb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056876 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmh4c\" (UniqueName: \"kubernetes.io/projected/90a4381e-451b-4940-932a-efba1d101c81-kube-api-access-bmh4c\") pod \"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056904 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgr5k\" (UniqueName: \"kubernetes.io/projected/7e230409-6e68-4f7c-b0c3-3e55433b22c1-kube-api-access-fgr5k\") pod \"package-server-manager-789f6589d5-tbnnc\" (UID: \"7e230409-6e68-4f7c-b0c3-3e55433b22c1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056932 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8cba929a-19da-479b-b9fb-b4cffaaba4c2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5dh8t\" (UID: \"8cba929a-19da-479b-b9fb-b4cffaaba4c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056951 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9twfs\" (UniqueName: \"kubernetes.io/projected/b50bda2b-e707-456e-af02-796b6d9a4cdf-kube-api-access-9twfs\") pod \"dns-operator-744455d44c-zjf9d\" (UID: \"b50bda2b-e707-456e-af02-796b6d9a4cdf\") " pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.056990 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2244349f-df5c-4813-a0e7-418a602f57b0-service-ca-bundle\") pod 
\"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057019 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-metrics-tls\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057042 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057061 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e8f132b-916b-4973-9873-5919cb12251c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-glppw\" (UID: \"1e8f132b-916b-4973-9873-5919cb12251c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057081 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/49c456f9-6cbf-4e3c-992a-8636357253ad-console-oauth-config\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057099 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0fffe8f2-59b1-4215-809e-461bc8f5e386-encryption-config\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057119 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057201 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/222d5540-6b86-404a-b787-ea6a6043206a-etcd-client\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057223 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-webhook-cert\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057240 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:37 
crc kubenswrapper[4947]: I0125 00:11:37.057272 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-client-ca\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057296 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60ffdf66-0472-4e1f-9ea6-869acc338d0e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-spsw9\" (UID: \"60ffdf66-0472-4e1f-9ea6-869acc338d0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057319 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x956c\" (UniqueName: \"kubernetes.io/projected/37b7a00e-4def-4d1e-8333-94d15174223b-kube-api-access-x956c\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057337 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/222d5540-6b86-404a-b787-ea6a6043206a-config\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057358 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4jhb\" (UniqueName: \"kubernetes.io/projected/49c456f9-6cbf-4e3c-992a-8636357253ad-kube-api-access-l4jhb\") pod \"console-f9d7485db-95tmb\" 
(UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057376 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6a491f6-3829-4c9d-88cb-a49864576106-srv-cert\") pod \"olm-operator-6b444d44fb-nkrs8\" (UID: \"b6a491f6-3829-4c9d-88cb-a49864576106\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057387 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4c613148-89dd-4904-b721-c90f6a0f89ba-trusted-ca\") pod \"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057396 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-audit\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057495 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4e8662e0-1de8-4371-8836-214a0394675c-audit-dir\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057523 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-service-ca\") pod \"console-f9d7485db-95tmb\" (UID: 
\"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057552 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql48r\" (UniqueName: \"kubernetes.io/projected/b6a491f6-3829-4c9d-88cb-a49864576106-kube-api-access-ql48r\") pod \"olm-operator-6b444d44fb-nkrs8\" (UID: \"b6a491f6-3829-4c9d-88cb-a49864576106\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057584 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c456f9-6cbf-4e3c-992a-8636357253ad-console-serving-cert\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057605 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qzff\" (UniqueName: \"kubernetes.io/projected/f56c1338-08c8-47de-b24a-3aaf85e315f8-kube-api-access-5qzff\") pod \"control-plane-machine-set-operator-78cbb6b69f-vwtw5\" (UID: \"f56c1338-08c8-47de-b24a-3aaf85e315f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057629 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60ffdf66-0472-4e1f-9ea6-869acc338d0e-config\") pod \"kube-controller-manager-operator-78b949d7b-spsw9\" (UID: \"60ffdf66-0472-4e1f-9ea6-869acc338d0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057662 4947 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-mf4jd\" (UniqueName: \"kubernetes.io/projected/2244349f-df5c-4813-a0e7-418a602f57b0-kube-api-access-mf4jd\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057682 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f56c1338-08c8-47de-b24a-3aaf85e315f8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vwtw5\" (UID: \"f56c1338-08c8-47de-b24a-3aaf85e315f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057703 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-oauth-serving-cert\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057721 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-key\") pod \"service-ca-9c57cc56f-lz644\" (UID: \"c04cc1eb-ec23-4876-afd1-f123c04cdc8a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lz644" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057744 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2244349f-df5c-4813-a0e7-418a602f57b0-default-certificate\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " 
pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057774 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-client-ca\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057791 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-trusted-ca\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057812 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c613148-89dd-4904-b721-c90f6a0f89ba-serving-cert\") pod \"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057830 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b50bda2b-e707-456e-af02-796b6d9a4cdf-metrics-tls\") pod \"dns-operator-744455d44c-zjf9d\" (UID: \"b50bda2b-e707-456e-af02-796b6d9a4cdf\") " pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057849 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqkd6\" (UniqueName: \"kubernetes.io/projected/4c613148-89dd-4904-b721-c90f6a0f89ba-kube-api-access-qqkd6\") pod 
\"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057867 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fffe8f2-59b1-4215-809e-461bc8f5e386-serving-cert\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057881 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-config\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057890 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhc9z\" (UniqueName: \"kubernetes.io/projected/b8f2f610-05dc-49ea-882e-634d283b3caa-kube-api-access-dhc9z\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057926 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-console-config\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057946 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057964 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.057983 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0fffe8f2-59b1-4215-809e-461bc8f5e386-etcd-client\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058004 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6a491f6-3829-4c9d-88cb-a49864576106-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nkrs8\" (UID: \"b6a491f6-3829-4c9d-88cb-a49864576106\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058025 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-etcd-serving-ca\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc 
kubenswrapper[4947]: I0125 00:11:37.058042 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/90a4381e-451b-4940-932a-efba1d101c81-machine-approver-tls\") pod \"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058065 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058084 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpn25\" (UniqueName: \"kubernetes.io/projected/0fffe8f2-59b1-4215-809e-461bc8f5e386-kube-api-access-xpn25\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058111 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8f2f610-05dc-49ea-882e-634d283b3caa-config\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058143 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9e4f53a6-fcc3-4310-965d-9a5dda91080b-serviceca\") pod \"image-pruner-29488320-jf979\" (UID: 
\"9e4f53a6-fcc3-4310-965d-9a5dda91080b\") " pod="openshift-image-registry/image-pruner-29488320-jf979" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058165 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/222d5540-6b86-404a-b787-ea6a6043206a-etcd-ca\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058187 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmpf6\" (UniqueName: \"kubernetes.io/projected/8cba929a-19da-479b-b9fb-b4cffaaba4c2-kube-api-access-vmpf6\") pod \"openshift-config-operator-7777fb866f-5dh8t\" (UID: \"8cba929a-19da-479b-b9fb-b4cffaaba4c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058210 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xtn8\" (UniqueName: \"kubernetes.io/projected/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-kube-api-access-2xtn8\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058228 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60ffdf66-0472-4e1f-9ea6-869acc338d0e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-spsw9\" (UID: \"60ffdf66-0472-4e1f-9ea6-869acc338d0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058245 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-image-import-ca\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058264 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2244349f-df5c-4813-a0e7-418a602f57b0-metrics-certs\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058281 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c613148-89dd-4904-b721-c90f6a0f89ba-config\") pod \"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058314 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/79a96518-940a-4490-9067-9e2f873753f7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-25dw7\" (UID: \"79a96518-940a-4490-9067-9e2f873753f7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058336 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4p7b\" (UniqueName: \"kubernetes.io/projected/79a96518-940a-4490-9067-9e2f873753f7-kube-api-access-l4p7b\") pod \"cluster-samples-operator-665b6dd947-25dw7\" (UID: \"79a96518-940a-4490-9067-9e2f873753f7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" Jan 25 00:11:37 crc 
kubenswrapper[4947]: I0125 00:11:37.058361 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2prvv\" (UID: \"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058406 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njhj5\" (UniqueName: \"kubernetes.io/projected/16240ac3-819b-4e68-bca9-c97c94599fbb-kube-api-access-njhj5\") pod \"openshift-apiserver-operator-796bbdcf4f-tq4ph\" (UID: \"16240ac3-819b-4e68-bca9-c97c94599fbb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058421 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/90a4381e-451b-4940-932a-efba1d101c81-auth-proxy-config\") pod \"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058429 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e8ad493-9466-46d8-8307-13f24463f184-config\") pod \"service-ca-operator-777779d784-n4bqp\" (UID: \"0e8ad493-9466-46d8-8307-13f24463f184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058499 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4e8662e0-1de8-4371-8836-214a0394675c-audit-dir\") pod 
\"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058510 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b8f2f610-05dc-49ea-882e-634d283b3caa-images\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.058548 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fffe8f2-59b1-4215-809e-461bc8f5e386-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.059540 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b8f2f610-05dc-49ea-882e-634d283b3caa-images\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.059714 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-audit\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.060103 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90a4381e-451b-4940-932a-efba1d101c81-config\") pod 
\"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.060895 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2244349f-df5c-4813-a0e7-418a602f57b0-service-ca-bundle\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.061437 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.061471 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-h6jgn"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.061482 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5zvdg"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.061741 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/222d5540-6b86-404a-b787-ea6a6043206a-config\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.063272 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.067209 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/222d5540-6b86-404a-b787-ea6a6043206a-etcd-client\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.067467 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-95tmb"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.069286 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-oauth-serving-cert\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.071487 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-console-config\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.071583 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e8662e0-1de8-4371-8836-214a0394675c-serving-cert\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.071947 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-etcd-serving-ca\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " 
pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.072265 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c613148-89dd-4904-b721-c90f6a0f89ba-config\") pod \"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.074404 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d15a018-8297-4e45-8a62-afa89a267381-serving-cert\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.075531 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-client-ca\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.075620 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4e8662e0-1de8-4371-8836-214a0394675c-node-pullsecrets\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.075756 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.075807 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-dns-operator/dns-operator-744455d44c-zjf9d"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.075992 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/222d5540-6b86-404a-b787-ea6a6043206a-etcd-service-ca\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.076822 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/2244349f-df5c-4813-a0e7-418a602f57b0-default-certificate\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.077299 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9e4f53a6-fcc3-4310-965d-9a5dda91080b-serviceca\") pod \"image-pruner-29488320-jf979\" (UID: \"9e4f53a6-fcc3-4310-965d-9a5dda91080b\") " pod="openshift-image-registry/image-pruner-29488320-jf979" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.077310 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/222d5540-6b86-404a-b787-ea6a6043206a-etcd-ca\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.077426 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-client-ca\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.077926 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16240ac3-819b-4e68-bca9-c97c94599fbb-config\") pod \"openshift-apiserver-operator-796bbdcf4f-tq4ph\" (UID: \"16240ac3-819b-4e68-bca9-c97c94599fbb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.077931 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-image-import-ca\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.078061 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c613148-89dd-4904-b721-c90f6a0f89ba-serving-cert\") pod \"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.078251 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.078948 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 25 00:11:37 crc 
kubenswrapper[4947]: I0125 00:11:37.079288 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e8662e0-1de8-4371-8836-214a0394675c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.079330 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.079356 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mprs4"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.079371 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.079484 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.079865 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8f2f610-05dc-49ea-882e-634d283b3caa-config\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.080273 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/4e8662e0-1de8-4371-8836-214a0394675c-encryption-config\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.080288 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4e8662e0-1de8-4371-8836-214a0394675c-etcd-client\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.080620 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eaa67d1d-92d7-41aa-b72f-aee9bca370fc-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-cmngb\" (UID: \"eaa67d1d-92d7-41aa-b72f-aee9bca370fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.080659 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.081012 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16240ac3-819b-4e68-bca9-c97c94599fbb-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-tq4ph\" (UID: \"16240ac3-819b-4e68-bca9-c97c94599fbb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.081142 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-config\") pod \"controller-manager-879f6c89f-5nql4\" (UID: 
\"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.081331 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-config\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.081552 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b7a00e-4def-4d1e-8333-94d15174223b-serving-cert\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.082418 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2244349f-df5c-4813-a0e7-418a602f57b0-metrics-certs\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.082484 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.083281 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/79a96518-940a-4490-9067-9e2f873753f7-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-25dw7\" (UID: \"79a96518-940a-4490-9067-9e2f873753f7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" Jan 25 00:11:37 
crc kubenswrapper[4947]: I0125 00:11:37.083505 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/2244349f-df5c-4813-a0e7-418a602f57b0-stats-auth\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.083611 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vx9fn"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.083701 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/8cba929a-19da-479b-b9fb-b4cffaaba4c2-available-featuregates\") pod \"openshift-config-operator-7777fb866f-5dh8t\" (UID: \"8cba929a-19da-479b-b9fb-b4cffaaba4c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.083790 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/b50bda2b-e707-456e-af02-796b6d9a4cdf-metrics-tls\") pod \"dns-operator-744455d44c-zjf9d\" (UID: \"b50bda2b-e707-456e-af02-796b6d9a4cdf\") " pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.084190 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/222d5540-6b86-404a-b787-ea6a6043206a-serving-cert\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.084667 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/b8f2f610-05dc-49ea-882e-634d283b3caa-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.084686 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.085683 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.086694 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.087822 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-5s2mh"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.087982 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.088755 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eaa67d1d-92d7-41aa-b72f-aee9bca370fc-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-cmngb\" (UID: \"eaa67d1d-92d7-41aa-b72f-aee9bca370fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.088773 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cba929a-19da-479b-b9fb-b4cffaaba4c2-serving-cert\") pod \"openshift-config-operator-7777fb866f-5dh8t\" (UID: \"8cba929a-19da-479b-b9fb-b4cffaaba4c2\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.089477 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.097314 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pjjgh"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.097460 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.100454 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-trusted-ca-bundle\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.101188 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.102973 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.105847 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.106983 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.108284 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.109302 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/90a4381e-451b-4940-932a-efba1d101c81-machine-approver-tls\") pod \"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.114087 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.116269 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.118014 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.119343 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/60ffdf66-0472-4e1f-9ea6-869acc338d0e-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-spsw9\" (UID: \"60ffdf66-0472-4e1f-9ea6-869acc338d0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.119467 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qgqfk"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.120693 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5s2mh"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.122303 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.122873 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-k7fhc"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.124026 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-wfcjp"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.125021 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wfcjp" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.125274 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pjjgh"] Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.129556 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60ffdf66-0472-4e1f-9ea6-869acc338d0e-config\") pod \"kube-controller-manager-operator-78b949d7b-spsw9\" (UID: \"60ffdf66-0472-4e1f-9ea6-869acc338d0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.144685 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.156898 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/49c456f9-6cbf-4e3c-992a-8636357253ad-console-oauth-config\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.159723 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e230409-6e68-4f7c-b0c3-3e55433b22c1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tbnnc\" (UID: \"7e230409-6e68-4f7c-b0c3-3e55433b22c1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.159868 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0fffe8f2-59b1-4215-809e-461bc8f5e386-audit-policies\") 
pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.159958 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0fffe8f2-59b1-4215-809e-461bc8f5e386-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.160050 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgmn4\" (UniqueName: \"kubernetes.io/projected/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-kube-api-access-dgmn4\") pod \"service-ca-9c57cc56f-lz644\" (UID: \"c04cc1eb-ec23-4876-afd1-f123c04cdc8a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lz644" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.160153 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzfhp\" (UniqueName: \"kubernetes.io/projected/0e8ad493-9466-46d8-8307-13f24463f184-kube-api-access-pzfhp\") pod \"service-ca-operator-777779d784-n4bqp\" (UID: \"0e8ad493-9466-46d8-8307-13f24463f184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.160249 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-images\") pod \"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.160324 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-proxy-tls\") pod \"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.160468 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-tmpfs\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.160592 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-apiservice-cert\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.160739 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgr5k\" (UniqueName: \"kubernetes.io/projected/7e230409-6e68-4f7c-b0c3-3e55433b22c1-kube-api-access-fgr5k\") pod \"package-server-manager-789f6589d5-tbnnc\" (UID: \"7e230409-6e68-4f7c-b0c3-3e55433b22c1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.160867 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-metrics-tls\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.160976 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.161067 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e8f132b-916b-4973-9873-5919cb12251c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-glppw\" (UID: \"1e8f132b-916b-4973-9873-5919cb12251c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.161171 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0fffe8f2-59b1-4215-809e-461bc8f5e386-encryption-config\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.161248 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-webhook-cert\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.161319 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.161425 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6a491f6-3829-4c9d-88cb-a49864576106-srv-cert\") pod \"olm-operator-6b444d44fb-nkrs8\" (UID: \"b6a491f6-3829-4c9d-88cb-a49864576106\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.161529 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql48r\" (UniqueName: \"kubernetes.io/projected/b6a491f6-3829-4c9d-88cb-a49864576106-kube-api-access-ql48r\") pod \"olm-operator-6b444d44fb-nkrs8\" (UID: \"b6a491f6-3829-4c9d-88cb-a49864576106\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.161622 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qzff\" (UniqueName: \"kubernetes.io/projected/f56c1338-08c8-47de-b24a-3aaf85e315f8-kube-api-access-5qzff\") pod \"control-plane-machine-set-operator-78cbb6b69f-vwtw5\" (UID: \"f56c1338-08c8-47de-b24a-3aaf85e315f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.161724 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f56c1338-08c8-47de-b24a-3aaf85e315f8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vwtw5\" (UID: \"f56c1338-08c8-47de-b24a-3aaf85e315f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.161806 4947 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-key\") pod \"service-ca-9c57cc56f-lz644\" (UID: \"c04cc1eb-ec23-4876-afd1-f123c04cdc8a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lz644" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.161917 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-trusted-ca\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162002 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fffe8f2-59b1-4215-809e-461bc8f5e386-serving-cert\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.161274 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-tmpfs\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162193 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0fffe8f2-59b1-4215-809e-461bc8f5e386-etcd-client\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162280 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6a491f6-3829-4c9d-88cb-a49864576106-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nkrs8\" (UID: \"b6a491f6-3829-4c9d-88cb-a49864576106\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162378 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpn25\" (UniqueName: \"kubernetes.io/projected/0fffe8f2-59b1-4215-809e-461bc8f5e386-kube-api-access-xpn25\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162506 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2prvv\" (UID: \"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162601 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e8ad493-9466-46d8-8307-13f24463f184-config\") pod \"service-ca-operator-777779d784-n4bqp\" (UID: \"0e8ad493-9466-46d8-8307-13f24463f184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162274 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-auth-proxy-config\") pod \"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 
25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162680 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fffe8f2-59b1-4215-809e-461bc8f5e386-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162801 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e8ad493-9466-46d8-8307-13f24463f184-serving-cert\") pod \"service-ca-operator-777779d784-n4bqp\" (UID: \"0e8ad493-9466-46d8-8307-13f24463f184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162846 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9p5g\" (UniqueName: \"kubernetes.io/projected/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-kube-api-access-j9p5g\") pod \"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162892 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162913 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-cabundle\") pod \"service-ca-9c57cc56f-lz644\" (UID: \"c04cc1eb-ec23-4876-afd1-f123c04cdc8a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lz644" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162941 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7brwl\" (UniqueName: \"kubernetes.io/projected/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-kube-api-access-7brwl\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162971 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e8f132b-916b-4973-9873-5919cb12251c-config\") pod \"kube-apiserver-operator-766d6c64bb-glppw\" (UID: \"1e8f132b-916b-4973-9873-5919cb12251c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.162993 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e8f132b-916b-4973-9873-5919cb12251c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-glppw\" (UID: \"1e8f132b-916b-4973-9873-5919cb12251c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.163035 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0fffe8f2-59b1-4215-809e-461bc8f5e386-audit-dir\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.163064 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2prvv\" (UID: \"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" Jan 25 00:11:37 crc 
kubenswrapper[4947]: I0125 00:11:37.163092 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jx5zf\" (UniqueName: \"kubernetes.io/projected/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-kube-api-access-jx5zf\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.163121 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2prvv\" (UID: \"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.163313 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0fffe8f2-59b1-4215-809e-461bc8f5e386-audit-dir\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.168882 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c456f9-6cbf-4e3c-992a-8636357253ad-console-serving-cert\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.182844 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.190229 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/49c456f9-6cbf-4e3c-992a-8636357253ad-service-ca\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.202982 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.223570 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.231891 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-images\") pod \"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.242907 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.262238 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.274392 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-proxy-tls\") pod \"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.282517 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 25 
00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.312328 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.324531 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.324827 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-trusted-ca\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.343665 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.362559 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.374457 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-metrics-tls\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.382979 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.403105 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.423984 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.453492 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.463421 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.483336 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.503819 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.523175 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.535546 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e8f132b-916b-4973-9873-5919cb12251c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-glppw\" (UID: \"1e8f132b-916b-4973-9873-5919cb12251c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.543735 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.563862 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.582886 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.585169 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e8f132b-916b-4973-9873-5919cb12251c-config\") pod \"kube-apiserver-operator-766d6c64bb-glppw\" (UID: \"1e8f132b-916b-4973-9873-5919cb12251c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.603738 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.624228 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.644564 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.663921 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.683591 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.703226 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.718335 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0fffe8f2-59b1-4215-809e-461bc8f5e386-etcd-client\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.723745 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.736184 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fffe8f2-59b1-4215-809e-461bc8f5e386-serving-cert\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.743500 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.756633 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0fffe8f2-59b1-4215-809e-461bc8f5e386-encryption-config\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.764913 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.783662 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.803031 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.823323 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.842887 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.851413 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0fffe8f2-59b1-4215-809e-461bc8f5e386-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.863263 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.871984 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0fffe8f2-59b1-4215-809e-461bc8f5e386-audit-policies\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.883981 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.894306 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fffe8f2-59b1-4215-809e-461bc8f5e386-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.903024 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.921192 4947 request.go:700] Waited for 1.012516009s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-oauth-apiserver/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.922651 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.944037 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.964758 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.976223 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/f56c1338-08c8-47de-b24a-3aaf85e315f8-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-vwtw5\" (UID: \"f56c1338-08c8-47de-b24a-3aaf85e315f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5"
Jan 25 00:11:37 crc kubenswrapper[4947]: I0125 00:11:37.983393 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.030937 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.031185 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.038969 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6a491f6-3829-4c9d-88cb-a49864576106-profile-collector-cert\") pod \"olm-operator-6b444d44fb-nkrs8\" (UID: \"b6a491f6-3829-4c9d-88cb-a49864576106\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.042668 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.063072 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.083342 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.089452 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.089522 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.089627 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.103895 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.124081 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.134550 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7e230409-6e68-4f7c-b0c3-3e55433b22c1-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-tbnnc\" (UID: \"7e230409-6e68-4f7c-b0c3-3e55433b22c1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.144248 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.161597 4947 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.161724 4947 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.161764 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-webhook-cert podName:17242dc8-e334-406d-ad0a-5dc9ecdf0d6a nodeName:}" failed. No retries permitted until 2026-01-25 00:11:38.66172451 +0000 UTC m=+137.894714970 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-webhook-cert") pod "packageserver-d55dfcdfc-bg9x9" (UID: "17242dc8-e334-406d-ad0a-5dc9ecdf0d6a") : failed to sync secret cache: timed out waiting for the condition
Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.161838 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-apiservice-cert podName:17242dc8-e334-406d-ad0a-5dc9ecdf0d6a nodeName:}" failed. No retries permitted until 2026-01-25 00:11:38.661802582 +0000 UTC m=+137.894793052 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-apiservice-cert") pod "packageserver-d55dfcdfc-bg9x9" (UID: "17242dc8-e334-406d-ad0a-5dc9ecdf0d6a") : failed to sync secret cache: timed out waiting for the condition
Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.161979 4947 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.161985 4947 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition
Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.162080 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6a491f6-3829-4c9d-88cb-a49864576106-srv-cert podName:b6a491f6-3829-4c9d-88cb-a49864576106 nodeName:}" failed. No retries permitted until 2026-01-25 00:11:38.662053948 +0000 UTC m=+137.895044428 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/b6a491f6-3829-4c9d-88cb-a49864576106-srv-cert") pod "olm-operator-6b444d44fb-nkrs8" (UID: "b6a491f6-3829-4c9d-88cb-a49864576106") : failed to sync secret cache: timed out waiting for the condition
Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.162235 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-key podName:c04cc1eb-ec23-4876-afd1-f123c04cdc8a nodeName:}" failed. No retries permitted until 2026-01-25 00:11:38.662093329 +0000 UTC m=+137.895083809 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-key") pod "service-ca-9c57cc56f-lz644" (UID: "c04cc1eb-ec23-4876-afd1-f123c04cdc8a") : failed to sync secret cache: timed out waiting for the condition
Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.163182 4947 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition
Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.163206 4947 secret.go:188] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.163269 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0e8ad493-9466-46d8-8307-13f24463f184-config podName:0e8ad493-9466-46d8-8307-13f24463f184 nodeName:}" failed. No retries permitted until 2026-01-25 00:11:38.66324595 +0000 UTC m=+137.896236640 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/0e8ad493-9466-46d8-8307-13f24463f184-config") pod "service-ca-operator-777779d784-n4bqp" (UID: "0e8ad493-9466-46d8-8307-13f24463f184") : failed to sync configmap cache: timed out waiting for the condition
Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.163314 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-serving-cert podName:4ec7126b-b0f9-4fff-a11f-76726ce4c4ff nodeName:}" failed. No retries permitted until 2026-01-25 00:11:38.663294611 +0000 UTC m=+137.896285091 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-serving-cert") pod "openshift-kube-scheduler-operator-5fdd9b5758-2prvv" (UID: "4ec7126b-b0f9-4fff-a11f-76726ce4c4ff") : failed to sync secret cache: timed out waiting for the condition
Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.163328 4947 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition
Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.163455 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e8ad493-9466-46d8-8307-13f24463f184-serving-cert podName:0e8ad493-9466-46d8-8307-13f24463f184 nodeName:}" failed. No retries permitted until 2026-01-25 00:11:38.663401984 +0000 UTC m=+137.896392454 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0e8ad493-9466-46d8-8307-13f24463f184-serving-cert") pod "service-ca-operator-777779d784-n4bqp" (UID: "0e8ad493-9466-46d8-8307-13f24463f184") : failed to sync secret cache: timed out waiting for the condition
Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.164608 4947 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition
Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.164680 4947 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition
Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.164752 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-config podName:4ec7126b-b0f9-4fff-a11f-76726ce4c4ff nodeName:}" failed. No retries permitted until 2026-01-25 00:11:38.664723098 +0000 UTC m=+137.897713578 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-config") pod "openshift-kube-scheduler-operator-5fdd9b5758-2prvv" (UID: "4ec7126b-b0f9-4fff-a11f-76726ce4c4ff") : failed to sync configmap cache: timed out waiting for the condition
Jan 25 00:11:38 crc kubenswrapper[4947]: E0125 00:11:38.164789 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-cabundle podName:c04cc1eb-ec23-4876-afd1-f123c04cdc8a nodeName:}" failed. No retries permitted until 2026-01-25 00:11:38.664767229 +0000 UTC m=+137.897757699 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-cabundle") pod "service-ca-9c57cc56f-lz644" (UID: "c04cc1eb-ec23-4876-afd1-f123c04cdc8a") : failed to sync configmap cache: timed out waiting for the condition
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.164792 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.183729 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.202109 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.223664 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.243518 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.265787 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.282972 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.303754 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.323866 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.343195 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.364366 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.383745 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.404707 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.424791 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.443757 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.463084 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.513956 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbbs4\" (UniqueName: \"kubernetes.io/projected/d3a733c1-a1cf-42ef-a056-27185292354f-kube-api-access-fbbs4\") pod \"oauth-openshift-558db77b4-dmsjj\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.538186 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tm9wz\" (UniqueName: \"kubernetes.io/projected/bf8b174c-d1eb-4a2d-88c2-113302fa2300-kube-api-access-tm9wz\") pod \"authentication-operator-69f744f599-nt9qq\" (UID: \"bf8b174c-d1eb-4a2d-88c2-113302fa2300\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.544164 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.561495 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.566812 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.583641 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.604612 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.606444 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.624281 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.645043 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.668699 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.683255 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.691073 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-webhook-cert\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.691164 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6a491f6-3829-4c9d-88cb-a49864576106-srv-cert\") pod \"olm-operator-6b444d44fb-nkrs8\" (UID: \"b6a491f6-3829-4c9d-88cb-a49864576106\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.691218 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-key\") pod \"service-ca-9c57cc56f-lz644\" (UID: \"c04cc1eb-ec23-4876-afd1-f123c04cdc8a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lz644"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.691311 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2prvv\" (UID: \"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.691368 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e8ad493-9466-46d8-8307-13f24463f184-config\") pod \"service-ca-operator-777779d784-n4bqp\" (UID: \"0e8ad493-9466-46d8-8307-13f24463f184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.691392 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e8ad493-9466-46d8-8307-13f24463f184-serving-cert\") pod \"service-ca-operator-777779d784-n4bqp\" (UID: \"0e8ad493-9466-46d8-8307-13f24463f184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.691460 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-cabundle\") pod \"service-ca-9c57cc56f-lz644\" (UID: \"c04cc1eb-ec23-4876-afd1-f123c04cdc8a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lz644"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.691505 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2prvv\" (UID: \"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.691616 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-apiservice-cert\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.693775 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-cabundle\") pod \"service-ca-9c57cc56f-lz644\" (UID: \"c04cc1eb-ec23-4876-afd1-f123c04cdc8a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lz644"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.694603 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2prvv\" (UID: \"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.698463 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-webhook-cert\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.698979 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-apiservice-cert\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.701007 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6a491f6-3829-4c9d-88cb-a49864576106-srv-cert\") pod \"olm-operator-6b444d44fb-nkrs8\" (UID: \"b6a491f6-3829-4c9d-88cb-a49864576106\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.705006 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-signing-key\") pod \"service-ca-9c57cc56f-lz644\" (UID: \"c04cc1eb-ec23-4876-afd1-f123c04cdc8a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lz644"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.708995 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2prvv\" (UID: \"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.710992 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e8ad493-9466-46d8-8307-13f24463f184-serving-cert\") pod \"service-ca-operator-777779d784-n4bqp\" (UID: \"0e8ad493-9466-46d8-8307-13f24463f184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.745028 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9twfs\" (UniqueName: \"kubernetes.io/projected/b50bda2b-e707-456e-af02-796b6d9a4cdf-kube-api-access-9twfs\") pod \"dns-operator-744455d44c-zjf9d\" (UID: \"b50bda2b-e707-456e-af02-796b6d9a4cdf\") " pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.767300 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x956c\" (UniqueName: \"kubernetes.io/projected/37b7a00e-4def-4d1e-8333-94d15174223b-kube-api-access-x956c\") pod \"route-controller-manager-6576b87f9c-vw66z\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.784232 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4jhb\" (UniqueName: \"kubernetes.io/projected/49c456f9-6cbf-4e3c-992a-8636357253ad-kube-api-access-l4jhb\") pod \"console-f9d7485db-95tmb\" (UID: \"49c456f9-6cbf-4e3c-992a-8636357253ad\") " pod="openshift-console/console-f9d7485db-95tmb"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.801410 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhc9z\" (UniqueName: \"kubernetes.io/projected/b8f2f610-05dc-49ea-882e-634d283b3caa-kube-api-access-dhc9z\") pod \"machine-api-operator-5694c8668f-h6jgn\" (UID: \"b8f2f610-05dc-49ea-882e-634d283b3caa\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.818275 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mf4jd\" (UniqueName: \"kubernetes.io/projected/2244349f-df5c-4813-a0e7-418a602f57b0-kube-api-access-mf4jd\") pod \"router-default-5444994796-5nscb\" (UID: \"2244349f-df5c-4813-a0e7-418a602f57b0\") " pod="openshift-ingress/router-default-5444994796-5nscb"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.838481 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4p7b\" (UniqueName: \"kubernetes.io/projected/79a96518-940a-4490-9067-9e2f873753f7-kube-api-access-l4p7b\") pod \"cluster-samples-operator-665b6dd947-25dw7\" (UID: \"79a96518-940a-4490-9067-9e2f873753f7\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.862283 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njhj5\" (UniqueName: \"kubernetes.io/projected/16240ac3-819b-4e68-bca9-c97c94599fbb-kube-api-access-njhj5\") pod \"openshift-apiserver-operator-796bbdcf4f-tq4ph\" (UID: \"16240ac3-819b-4e68-bca9-c97c94599fbb\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.868250 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.884574 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqkd6\" (UniqueName: \"kubernetes.io/projected/4c613148-89dd-4904-b721-c90f6a0f89ba-kube-api-access-qqkd6\") pod \"console-operator-58897d9998-xwjmr\" (UID: \"4c613148-89dd-4904-b721-c90f6a0f89ba\") " pod="openshift-console-operator/console-operator-58897d9998-xwjmr"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.887462 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5nscb"
Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.895879 4947 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.901278 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6msz\" (UniqueName: \"kubernetes.io/projected/9e4f53a6-fcc3-4310-965d-9a5dda91080b-kube-api-access-k6msz\") pod \"image-pruner-29488320-jf979\" (UID: \"9e4f53a6-fcc3-4310-965d-9a5dda91080b\") " pod="openshift-image-registry/image-pruner-29488320-jf979" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.906783 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.925649 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.926473 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.929702 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmpf6\" (UniqueName: \"kubernetes.io/projected/8cba929a-19da-479b-b9fb-b4cffaaba4c2-kube-api-access-vmpf6\") pod \"openshift-config-operator-7777fb866f-5dh8t\" (UID: \"8cba929a-19da-479b-b9fb-b4cffaaba4c2\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.943312 4947 request.go:700] Waited for 1.866333379s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager-operator/serviceaccounts/kube-controller-manager-operator/token Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.946179 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xtn8\" (UniqueName: \"kubernetes.io/projected/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-kube-api-access-2xtn8\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.964998 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/60ffdf66-0472-4e1f-9ea6-869acc338d0e-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-spsw9\" (UID: \"60ffdf66-0472-4e1f-9ea6-869acc338d0e\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" Jan 25 00:11:38 crc kubenswrapper[4947]: I0125 00:11:38.993940 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-pruner-29488320-jf979" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.005963 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4qnc\" (UniqueName: \"kubernetes.io/projected/222d5540-6b86-404a-b787-ea6a6043206a-kube-api-access-s4qnc\") pod \"etcd-operator-b45778765-2plqs\" (UID: \"222d5540-6b86-404a-b787-ea6a6043206a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.019052 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsx9g\" (UniqueName: \"kubernetes.io/projected/9d15a018-8297-4e45-8a62-afa89a267381-kube-api-access-zsx9g\") pod \"controller-manager-879f6c89f-5nql4\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.022740 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.043504 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mtfs\" (UniqueName: \"kubernetes.io/projected/eaa67d1d-92d7-41aa-b72f-aee9bca370fc-kube-api-access-9mtfs\") pod \"openshift-controller-manager-operator-756b6f6bc6-cmngb\" (UID: \"eaa67d1d-92d7-41aa-b72f-aee9bca370fc\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.045886 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5nscb" event={"ID":"2244349f-df5c-4813-a0e7-418a602f57b0","Type":"ContainerStarted","Data":"c9c497f538fb0eb148a67a905e3be1557ae1075ee9155bbbe0a049da1f40ded4"} Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.055419 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.082951 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b5f3c960-a56e-4c0e-82da-c8a39167eb8b-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-qxqhz\" (UID: \"b5f3c960-a56e-4c0e-82da-c8a39167eb8b\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.097400 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvf82\" (UniqueName: \"kubernetes.io/projected/4e8662e0-1de8-4371-8836-214a0394675c-kube-api-access-lvf82\") pod \"apiserver-76f77b778f-7kcc9\" (UID: \"4e8662e0-1de8-4371-8836-214a0394675c\") " pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.103714 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.123031 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.141519 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.142817 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.162941 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.190301 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.191710 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.203753 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.223559 4947 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.235449 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.243382 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.263534 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.273750 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.282900 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.283741 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.304582 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.318384 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.339218 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.351777 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgmn4\" (UniqueName: \"kubernetes.io/projected/c04cc1eb-ec23-4876-afd1-f123c04cdc8a-kube-api-access-dgmn4\") pod \"service-ca-9c57cc56f-lz644\" (UID: \"c04cc1eb-ec23-4876-afd1-f123c04cdc8a\") " pod="openshift-service-ca/service-ca-9c57cc56f-lz644" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.361865 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-lz644" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.372449 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzfhp\" (UniqueName: \"kubernetes.io/projected/0e8ad493-9466-46d8-8307-13f24463f184-kube-api-access-pzfhp\") pod \"service-ca-operator-777779d784-n4bqp\" (UID: \"0e8ad493-9466-46d8-8307-13f24463f184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.389876 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgr5k\" (UniqueName: \"kubernetes.io/projected/7e230409-6e68-4f7c-b0c3-3e55433b22c1-kube-api-access-fgr5k\") pod \"package-server-manager-789f6589d5-tbnnc\" (UID: \"7e230409-6e68-4f7c-b0c3-3e55433b22c1\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.411341 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-bound-sa-token\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.432434 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql48r\" (UniqueName: \"kubernetes.io/projected/b6a491f6-3829-4c9d-88cb-a49864576106-kube-api-access-ql48r\") pod \"olm-operator-6b444d44fb-nkrs8\" (UID: \"b6a491f6-3829-4c9d-88cb-a49864576106\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.449772 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qzff\" (UniqueName: 
\"kubernetes.io/projected/f56c1338-08c8-47de-b24a-3aaf85e315f8-kube-api-access-5qzff\") pod \"control-plane-machine-set-operator-78cbb6b69f-vwtw5\" (UID: \"f56c1338-08c8-47de-b24a-3aaf85e315f8\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.462467 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpn25\" (UniqueName: \"kubernetes.io/projected/0fffe8f2-59b1-4215-809e-461bc8f5e386-kube-api-access-xpn25\") pod \"apiserver-7bbb656c7d-j5vnk\" (UID: \"0fffe8f2-59b1-4215-809e-461bc8f5e386\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.492344 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7brwl\" (UniqueName: \"kubernetes.io/projected/d41f4b03-bff1-4ba1-a54b-b3e78014ecb8-kube-api-access-7brwl\") pod \"ingress-operator-5b745b69d9-tsj7s\" (UID: \"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.514248 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9p5g\" (UniqueName: \"kubernetes.io/projected/633ed9d6-a859-48e4-a100-f8e5bb3d6cfd-kube-api-access-j9p5g\") pod \"machine-config-operator-74547568cd-cms5v\" (UID: \"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.531712 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ec7126b-b0f9-4fff-a11f-76726ce4c4ff-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-2prvv\" (UID: \"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.543329 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.544224 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx5zf\" (UniqueName: \"kubernetes.io/projected/17242dc8-e334-406d-ad0a-5dc9ecdf0d6a-kube-api-access-jx5zf\") pod \"packageserver-d55dfcdfc-bg9x9\" (UID: \"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.552179 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.564362 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.570798 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1e8f132b-916b-4973-9873-5919cb12251c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-glppw\" (UID: \"1e8f132b-916b-4973-9873-5919cb12251c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.583843 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.599927 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.603858 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.614577 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.624025 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.631834 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:39 crc kubenswrapper[4947]: I0125 00:11:39.640655 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.162414 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6xwl\" (UniqueName: \"kubernetes.io/projected/0e97ae5e-35ab-41e9-aa03-ad060bbbd676-kube-api-access-z6xwl\") pod \"downloads-7954f5f757-5zvdg\" (UID: \"0e97ae5e-35ab-41e9-aa03-ad060bbbd676\") " pod="openshift-console/downloads-7954f5f757-5zvdg" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.162502 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.162881 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.163840 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-5zvdg" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.164663 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-bound-sa-token\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.164758 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-certificates\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.164826 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ce1b6238-9a41-4472-accc-e4d7d6371357-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.164878 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ce1b6238-9a41-4472-accc-e4d7d6371357-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.164999 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-tls\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.167228 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.167351 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e8ad493-9466-46d8-8307-13f24463f184-config\") pod \"service-ca-operator-777779d784-n4bqp\" (UID: \"0e8ad493-9466-46d8-8307-13f24463f184\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.168377 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" Jan 25 00:11:40 crc kubenswrapper[4947]: E0125 00:11:40.181680 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:40.681586761 +0000 UTC m=+139.914577241 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.185537 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww4b6\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-kube-api-access-ww4b6\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.185807 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-trusted-ca\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.213527 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmh4c\" (UniqueName: \"kubernetes.io/projected/90a4381e-451b-4940-932a-efba1d101c81-kube-api-access-bmh4c\") pod \"machine-approver-56656f9798-s2g88\" (UID: \"90a4381e-451b-4940-932a-efba1d101c81\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.248082 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.249452 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-nt9qq"] Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.250942 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph"] Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.254350 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-h6jgn"] Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.255434 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-zjf9d"] Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.259326 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dmsjj"] Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.296386 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:40 crc kubenswrapper[4947]: E0125 00:11:40.297022 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:40.796978213 +0000 UTC m=+140.029968683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297181 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4619229-a3d7-401d-92d8-b1195e6e08f8-config-volume\") pod \"collect-profiles-29488320-2ns8x\" (UID: \"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297251 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-tls\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297275 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tz9k4\" (UID: \"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297294 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nm6h\" (UniqueName: 
\"kubernetes.io/projected/c4619229-a3d7-401d-92d8-b1195e6e08f8-kube-api-access-5nm6h\") pod \"collect-profiles-29488320-2ns8x\" (UID: \"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297314 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f441771b-d1ad-442b-b344-e321cd553fbc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qcsn\" (UID: \"f441771b-d1ad-442b-b344-e321cd553fbc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297357 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297409 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4619229-a3d7-401d-92d8-b1195e6e08f8-secret-volume\") pod \"collect-profiles-29488320-2ns8x\" (UID: \"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297444 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkgt4\" (UniqueName: \"kubernetes.io/projected/8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf-kube-api-access-lkgt4\") pod \"machine-config-controller-84d6567774-tz9k4\" (UID: 
\"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297459 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf-proxy-tls\") pod \"machine-config-controller-84d6567774-tz9k4\" (UID: \"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297480 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-797s5\" (UniqueName: \"kubernetes.io/projected/ba74a9d5-0b44-4599-ac43-d117394771b0-kube-api-access-797s5\") pod \"catalog-operator-68c6474976-546b8\" (UID: \"ba74a9d5-0b44-4599-ac43-d117394771b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297494 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ba74a9d5-0b44-4599-ac43-d117394771b0-profile-collector-cert\") pod \"catalog-operator-68c6474976-546b8\" (UID: \"ba74a9d5-0b44-4599-ac43-d117394771b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297519 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vx9fn\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 
00:11:40.297536 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww4b6\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-kube-api-access-ww4b6\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297553 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f441771b-d1ad-442b-b344-e321cd553fbc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qcsn\" (UID: \"f441771b-d1ad-442b-b344-e321cd553fbc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297572 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vx2f\" (UniqueName: \"kubernetes.io/projected/f441771b-d1ad-442b-b344-e321cd553fbc-kube-api-access-6vx2f\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qcsn\" (UID: \"f441771b-d1ad-442b-b344-e321cd553fbc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297588 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m8vg\" (UniqueName: \"kubernetes.io/projected/caf7d2fa-5195-4e91-b838-a33c9e281dc1-kube-api-access-4m8vg\") pod \"migrator-59844c95c7-cxmmw\" (UID: \"caf7d2fa-5195-4e91-b838-a33c9e281dc1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297607 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-trusted-ca\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297626 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ba74a9d5-0b44-4599-ac43-d117394771b0-srv-cert\") pod \"catalog-operator-68c6474976-546b8\" (UID: \"ba74a9d5-0b44-4599-ac43-d117394771b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297661 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vx9fn\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297709 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxnl4\" (UniqueName: \"kubernetes.io/projected/fa35d682-53d6-4191-9cf9-f48b9f74e858-kube-api-access-sxnl4\") pod \"marketplace-operator-79b997595-vx9fn\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.297758 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-bound-sa-token\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc 
kubenswrapper[4947]: I0125 00:11:40.297800 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-certificates\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: E0125 00:11:40.298357 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:40.798345029 +0000 UTC m=+140.031335459 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.298917 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ce1b6238-9a41-4472-accc-e4d7d6371357-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.298993 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ce1b6238-9a41-4472-accc-e4d7d6371357-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mprs4\" (UID: 
\"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.299843 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-trusted-ca\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.300278 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ce1b6238-9a41-4472-accc-e4d7d6371357-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.301779 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-certificates\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.306983 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-tls\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.309288 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ce1b6238-9a41-4472-accc-e4d7d6371357-installation-pull-secrets\") pod 
\"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.322728 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww4b6\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-kube-api-access-ww4b6\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: W0125 00:11:40.342576 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf8b174c_d1eb_4a2d_88c2_113302fa2300.slice/crio-f13d4eaecedc3ea4b8275ea3df5dd2a15fe99a309dc244847fef98f9fa8c5fe1 WatchSource:0}: Error finding container f13d4eaecedc3ea4b8275ea3df5dd2a15fe99a309dc244847fef98f9fa8c5fe1: Status 404 returned error can't find the container with id f13d4eaecedc3ea4b8275ea3df5dd2a15fe99a309dc244847fef98f9fa8c5fe1 Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.342922 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-bound-sa-token\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: W0125 00:11:40.353447 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3a733c1_a1cf_42ef_a056_27185292354f.slice/crio-9fddec1b4c50133c39595d5ae85373dbb93cca3db14bf1f44dacfede0073d88d WatchSource:0}: Error finding container 9fddec1b4c50133c39595d5ae85373dbb93cca3db14bf1f44dacfede0073d88d: Status 404 returned error can't find the container with id 
9fddec1b4c50133c39595d5ae85373dbb93cca3db14bf1f44dacfede0073d88d Jan 25 00:11:40 crc kubenswrapper[4947]: W0125 00:11:40.355446 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8f2f610_05dc_49ea_882e_634d283b3caa.slice/crio-70796f6459b96427a71a3610eb2dfdfd1b263546ab86ca924c1f2e01cde24bc8 WatchSource:0}: Error finding container 70796f6459b96427a71a3610eb2dfdfd1b263546ab86ca924c1f2e01cde24bc8: Status 404 returned error can't find the container with id 70796f6459b96427a71a3610eb2dfdfd1b263546ab86ca924c1f2e01cde24bc8 Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.391369 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z"] Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.404094 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:40 crc kubenswrapper[4947]: E0125 00:11:40.404224 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:40.904202479 +0000 UTC m=+140.137192919 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.404487 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-registration-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.404554 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4619229-a3d7-401d-92d8-b1195e6e08f8-config-volume\") pod \"collect-profiles-29488320-2ns8x\" (UID: \"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.404582 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-socket-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.405770 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4619229-a3d7-401d-92d8-b1195e6e08f8-config-volume\") pod \"collect-profiles-29488320-2ns8x\" (UID: 
\"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.405821 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9252\" (UniqueName: \"kubernetes.io/projected/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-kube-api-access-k9252\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.405859 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-989fv\" (UniqueName: \"kubernetes.io/projected/8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261-kube-api-access-989fv\") pod \"ingress-canary-qgqfk\" (UID: \"8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261\") " pod="openshift-ingress-canary/ingress-canary-qgqfk" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.406019 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tz9k4\" (UID: \"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.406085 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nm6h\" (UniqueName: \"kubernetes.io/projected/c4619229-a3d7-401d-92d8-b1195e6e08f8-kube-api-access-5nm6h\") pod \"collect-profiles-29488320-2ns8x\" (UID: \"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.406139 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/f441771b-d1ad-442b-b344-e321cd553fbc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qcsn\" (UID: \"f441771b-d1ad-442b-b344-e321cd553fbc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.406707 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d8ad697f-6550-4e6a-80ab-e877a5d6e2cf-node-bootstrap-token\") pod \"machine-config-server-wfcjp\" (UID: \"d8ad697f-6550-4e6a-80ab-e877a5d6e2cf\") " pod="openshift-machine-config-operator/machine-config-server-wfcjp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.406848 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.406872 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ddmc\" (UniqueName: \"kubernetes.io/projected/5a4acfb5-2387-48ae-8c78-9d8ab4d96628-kube-api-access-9ddmc\") pod \"dns-default-5s2mh\" (UID: \"5a4acfb5-2387-48ae-8c78-9d8ab4d96628\") " pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.406933 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261-cert\") pod \"ingress-canary-qgqfk\" (UID: \"8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261\") " 
pod="openshift-ingress-canary/ingress-canary-qgqfk" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.406973 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-292sf\" (UniqueName: \"kubernetes.io/projected/d8ad697f-6550-4e6a-80ab-e877a5d6e2cf-kube-api-access-292sf\") pod \"machine-config-server-wfcjp\" (UID: \"d8ad697f-6550-4e6a-80ab-e877a5d6e2cf\") " pod="openshift-machine-config-operator/machine-config-server-wfcjp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407005 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4619229-a3d7-401d-92d8-b1195e6e08f8-secret-volume\") pod \"collect-profiles-29488320-2ns8x\" (UID: \"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407025 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-csi-data-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407043 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjqcj\" (UniqueName: \"kubernetes.io/projected/086f5a4b-235b-41a4-8bf6-75dd0626ba9e-kube-api-access-hjqcj\") pod \"multus-admission-controller-857f4d67dd-k7fhc\" (UID: \"086f5a4b-235b-41a4-8bf6-75dd0626ba9e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407099 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/5a4acfb5-2387-48ae-8c78-9d8ab4d96628-metrics-tls\") pod \"dns-default-5s2mh\" (UID: \"5a4acfb5-2387-48ae-8c78-9d8ab4d96628\") " pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407176 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkgt4\" (UniqueName: \"kubernetes.io/projected/8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf-kube-api-access-lkgt4\") pod \"machine-config-controller-84d6567774-tz9k4\" (UID: \"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407213 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-797s5\" (UniqueName: \"kubernetes.io/projected/ba74a9d5-0b44-4599-ac43-d117394771b0-kube-api-access-797s5\") pod \"catalog-operator-68c6474976-546b8\" (UID: \"ba74a9d5-0b44-4599-ac43-d117394771b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407262 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf-proxy-tls\") pod \"machine-config-controller-84d6567774-tz9k4\" (UID: \"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407282 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d8ad697f-6550-4e6a-80ab-e877a5d6e2cf-certs\") pod \"machine-config-server-wfcjp\" (UID: \"d8ad697f-6550-4e6a-80ab-e877a5d6e2cf\") " pod="openshift-machine-config-operator/machine-config-server-wfcjp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407341 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ba74a9d5-0b44-4599-ac43-d117394771b0-profile-collector-cert\") pod \"catalog-operator-68c6474976-546b8\" (UID: \"ba74a9d5-0b44-4599-ac43-d117394771b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407374 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vx9fn\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407423 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f441771b-d1ad-442b-b344-e321cd553fbc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qcsn\" (UID: \"f441771b-d1ad-442b-b344-e321cd553fbc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407457 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vx2f\" (UniqueName: \"kubernetes.io/projected/f441771b-d1ad-442b-b344-e321cd553fbc-kube-api-access-6vx2f\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qcsn\" (UID: \"f441771b-d1ad-442b-b344-e321cd553fbc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407525 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m8vg\" (UniqueName: 
\"kubernetes.io/projected/caf7d2fa-5195-4e91-b838-a33c9e281dc1-kube-api-access-4m8vg\") pod \"migrator-59844c95c7-cxmmw\" (UID: \"caf7d2fa-5195-4e91-b838-a33c9e281dc1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407634 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-mountpoint-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407668 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ba74a9d5-0b44-4599-ac43-d117394771b0-srv-cert\") pod \"catalog-operator-68c6474976-546b8\" (UID: \"ba74a9d5-0b44-4599-ac43-d117394771b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407747 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vx9fn\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407789 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxnl4\" (UniqueName: \"kubernetes.io/projected/fa35d682-53d6-4191-9cf9-f48b9f74e858-kube-api-access-sxnl4\") pod \"marketplace-operator-79b997595-vx9fn\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407910 4947 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a4acfb5-2387-48ae-8c78-9d8ab4d96628-config-volume\") pod \"dns-default-5s2mh\" (UID: \"5a4acfb5-2387-48ae-8c78-9d8ab4d96628\") " pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.407953 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/086f5a4b-235b-41a4-8bf6-75dd0626ba9e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-k7fhc\" (UID: \"086f5a4b-235b-41a4-8bf6-75dd0626ba9e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.408080 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-plugins-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: E0125 00:11:40.409514 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:40.909498739 +0000 UTC m=+140.142489369 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.410536 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f441771b-d1ad-442b-b344-e321cd553fbc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qcsn\" (UID: \"f441771b-d1ad-442b-b344-e321cd553fbc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.412066 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-tz9k4\" (UID: \"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.413994 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vx9fn\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.415026 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vx9fn\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.417424 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f441771b-d1ad-442b-b344-e321cd553fbc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qcsn\" (UID: \"f441771b-d1ad-442b-b344-e321cd553fbc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.417647 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.418735 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf-proxy-tls\") pod \"machine-config-controller-84d6567774-tz9k4\" (UID: \"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.419223 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ba74a9d5-0b44-4599-ac43-d117394771b0-profile-collector-cert\") pod \"catalog-operator-68c6474976-546b8\" (UID: \"ba74a9d5-0b44-4599-ac43-d117394771b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.419669 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/ba74a9d5-0b44-4599-ac43-d117394771b0-srv-cert\") pod \"catalog-operator-68c6474976-546b8\" (UID: \"ba74a9d5-0b44-4599-ac43-d117394771b0\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.419685 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4619229-a3d7-401d-92d8-b1195e6e08f8-secret-volume\") pod \"collect-profiles-29488320-2ns8x\" (UID: \"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.439888 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkgt4\" (UniqueName: \"kubernetes.io/projected/8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf-kube-api-access-lkgt4\") pod \"machine-config-controller-84d6567774-tz9k4\" (UID: \"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.460049 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vx2f\" (UniqueName: \"kubernetes.io/projected/f441771b-d1ad-442b-b344-e321cd553fbc-kube-api-access-6vx2f\") pod \"kube-storage-version-migrator-operator-b67b599dd-4qcsn\" (UID: \"f441771b-d1ad-442b-b344-e321cd553fbc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.487304 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-797s5\" (UniqueName: \"kubernetes.io/projected/ba74a9d5-0b44-4599-ac43-d117394771b0-kube-api-access-797s5\") pod \"catalog-operator-68c6474976-546b8\" (UID: \"ba74a9d5-0b44-4599-ac43-d117394771b0\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.487968 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.489876 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.500057 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxnl4\" (UniqueName: \"kubernetes.io/projected/fa35d682-53d6-4191-9cf9-f48b9f74e858-kube-api-access-sxnl4\") pod \"marketplace-operator-79b997595-vx9fn\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.508503 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.508780 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-plugins-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.508841 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-registration-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.508873 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-socket-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.508896 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9252\" (UniqueName: \"kubernetes.io/projected/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-kube-api-access-k9252\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.508918 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-989fv\" (UniqueName: \"kubernetes.io/projected/8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261-kube-api-access-989fv\") pod \"ingress-canary-qgqfk\" (UID: \"8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261\") " pod="openshift-ingress-canary/ingress-canary-qgqfk" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.508972 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d8ad697f-6550-4e6a-80ab-e877a5d6e2cf-node-bootstrap-token\") pod \"machine-config-server-wfcjp\" (UID: \"d8ad697f-6550-4e6a-80ab-e877a5d6e2cf\") " pod="openshift-machine-config-operator/machine-config-server-wfcjp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.509008 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ddmc\" 
(UniqueName: \"kubernetes.io/projected/5a4acfb5-2387-48ae-8c78-9d8ab4d96628-kube-api-access-9ddmc\") pod \"dns-default-5s2mh\" (UID: \"5a4acfb5-2387-48ae-8c78-9d8ab4d96628\") " pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.509041 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261-cert\") pod \"ingress-canary-qgqfk\" (UID: \"8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261\") " pod="openshift-ingress-canary/ingress-canary-qgqfk" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.509068 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-csi-data-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.509092 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjqcj\" (UniqueName: \"kubernetes.io/projected/086f5a4b-235b-41a4-8bf6-75dd0626ba9e-kube-api-access-hjqcj\") pod \"multus-admission-controller-857f4d67dd-k7fhc\" (UID: \"086f5a4b-235b-41a4-8bf6-75dd0626ba9e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.509116 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-292sf\" (UniqueName: \"kubernetes.io/projected/d8ad697f-6550-4e6a-80ab-e877a5d6e2cf-kube-api-access-292sf\") pod \"machine-config-server-wfcjp\" (UID: \"d8ad697f-6550-4e6a-80ab-e877a5d6e2cf\") " pod="openshift-machine-config-operator/machine-config-server-wfcjp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.509160 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" 
(UniqueName: \"kubernetes.io/secret/5a4acfb5-2387-48ae-8c78-9d8ab4d96628-metrics-tls\") pod \"dns-default-5s2mh\" (UID: \"5a4acfb5-2387-48ae-8c78-9d8ab4d96628\") " pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.509184 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d8ad697f-6550-4e6a-80ab-e877a5d6e2cf-certs\") pod \"machine-config-server-wfcjp\" (UID: \"d8ad697f-6550-4e6a-80ab-e877a5d6e2cf\") " pod="openshift-machine-config-operator/machine-config-server-wfcjp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.509227 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-mountpoint-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.509269 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a4acfb5-2387-48ae-8c78-9d8ab4d96628-config-volume\") pod \"dns-default-5s2mh\" (UID: \"5a4acfb5-2387-48ae-8c78-9d8ab4d96628\") " pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.509305 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/086f5a4b-235b-41a4-8bf6-75dd0626ba9e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-k7fhc\" (UID: \"086f5a4b-235b-41a4-8bf6-75dd0626ba9e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" Jan 25 00:11:40 crc kubenswrapper[4947]: E0125 00:11:40.509594 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:41.009575377 +0000 UTC m=+140.242565817 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.510054 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-plugins-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.510110 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-registration-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.510167 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-socket-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.510567 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-mountpoint-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.510667 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-csi-data-dir\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.511395 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5a4acfb5-2387-48ae-8c78-9d8ab4d96628-config-volume\") pod \"dns-default-5s2mh\" (UID: \"5a4acfb5-2387-48ae-8c78-9d8ab4d96628\") " pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.513852 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5a4acfb5-2387-48ae-8c78-9d8ab4d96628-metrics-tls\") pod \"dns-default-5s2mh\" (UID: \"5a4acfb5-2387-48ae-8c78-9d8ab4d96628\") " pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.515183 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/d8ad697f-6550-4e6a-80ab-e877a5d6e2cf-certs\") pod \"machine-config-server-wfcjp\" (UID: \"d8ad697f-6550-4e6a-80ab-e877a5d6e2cf\") " pod="openshift-machine-config-operator/machine-config-server-wfcjp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.518849 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/086f5a4b-235b-41a4-8bf6-75dd0626ba9e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-k7fhc\" (UID: 
\"086f5a4b-235b-41a4-8bf6-75dd0626ba9e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.521796 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-95tmb"] Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.522537 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/d8ad697f-6550-4e6a-80ab-e877a5d6e2cf-node-bootstrap-token\") pod \"machine-config-server-wfcjp\" (UID: \"d8ad697f-6550-4e6a-80ab-e877a5d6e2cf\") " pod="openshift-machine-config-operator/machine-config-server-wfcjp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.524739 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261-cert\") pod \"ingress-canary-qgqfk\" (UID: \"8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261\") " pod="openshift-ingress-canary/ingress-canary-qgqfk" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.527496 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m8vg\" (UniqueName: \"kubernetes.io/projected/caf7d2fa-5195-4e91-b838-a33c9e281dc1-kube-api-access-4m8vg\") pod \"migrator-59844c95c7-cxmmw\" (UID: \"caf7d2fa-5195-4e91-b838-a33c9e281dc1\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.562695 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nm6h\" (UniqueName: \"kubernetes.io/projected/c4619229-a3d7-401d-92d8-b1195e6e08f8-kube-api-access-5nm6h\") pod \"collect-profiles-29488320-2ns8x\" (UID: \"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.569619 4947 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc"] Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.571869 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.580499 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ddmc\" (UniqueName: \"kubernetes.io/projected/5a4acfb5-2387-48ae-8c78-9d8ab4d96628-kube-api-access-9ddmc\") pod \"dns-default-5s2mh\" (UID: \"5a4acfb5-2387-48ae-8c78-9d8ab4d96628\") " pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.601807 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.603362 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-292sf\" (UniqueName: \"kubernetes.io/projected/d8ad697f-6550-4e6a-80ab-e877a5d6e2cf-kube-api-access-292sf\") pod \"machine-config-server-wfcjp\" (UID: \"d8ad697f-6550-4e6a-80ab-e877a5d6e2cf\") " pod="openshift-machine-config-operator/machine-config-server-wfcjp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.610066 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: E0125 00:11:40.610569 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-25 00:11:41.11053516 +0000 UTC m=+140.343525600 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.617623 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9252\" (UniqueName: \"kubernetes.io/projected/4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f-kube-api-access-k9252\") pod \"csi-hostpathplugin-pjjgh\" (UID: \"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f\") " pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.638233 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-989fv\" (UniqueName: \"kubernetes.io/projected/8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261-kube-api-access-989fv\") pod \"ingress-canary-qgqfk\" (UID: \"8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261\") " pod="openshift-ingress-canary/ingress-canary-qgqfk" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.638492 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.655495 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-wfcjp" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.674063 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjqcj\" (UniqueName: \"kubernetes.io/projected/086f5a4b-235b-41a4-8bf6-75dd0626ba9e-kube-api-access-hjqcj\") pod \"multus-admission-controller-857f4d67dd-k7fhc\" (UID: \"086f5a4b-235b-41a4-8bf6-75dd0626ba9e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" Jan 25 00:11:40 crc kubenswrapper[4947]: W0125 00:11:40.697675 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90a4381e_451b_4940_932a_efba1d101c81.slice/crio-8081867ca83d0d3dba7706ba48debc4ce6aad588973a48266a6d034a2ba0cc6d WatchSource:0}: Error finding container 8081867ca83d0d3dba7706ba48debc4ce6aad588973a48266a6d034a2ba0cc6d: Status 404 returned error can't find the container with id 8081867ca83d0d3dba7706ba48debc4ce6aad588973a48266a6d034a2ba0cc6d Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.716939 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:40 crc kubenswrapper[4947]: E0125 00:11:40.725482 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:41.225423229 +0000 UTC m=+140.458413669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.725660 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: E0125 00:11:40.728079 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:41.228052137 +0000 UTC m=+140.461042567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.759886 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.776357 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.799400 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-2plqs"] Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.806809 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.828704 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:40 crc kubenswrapper[4947]: E0125 00:11:40.832763 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:41.332744417 +0000 UTC m=+140.565734857 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.849197 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-5zvdg"] Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.886611 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qgqfk" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.894289 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" Jan 25 00:11:40 crc kubenswrapper[4947]: I0125 00:11:40.933964 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:40 crc kubenswrapper[4947]: E0125 00:11:40.934293 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:41.434282455 +0000 UTC m=+140.667272895 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.037329 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:41 crc kubenswrapper[4947]: E0125 00:11:41.037865 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:41.537850956 +0000 UTC m=+140.770841396 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.057550 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t"] Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.078045 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8"] Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.080378 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" event={"ID":"bf8b174c-d1eb-4a2d-88c2-113302fa2300","Type":"ContainerStarted","Data":"e8d5de3cacf1e240be8237587d653ef39e1763036e3e7632b23368ebfdb7b23d"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.080409 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" event={"ID":"bf8b174c-d1eb-4a2d-88c2-113302fa2300","Type":"ContainerStarted","Data":"f13d4eaecedc3ea4b8275ea3df5dd2a15fe99a309dc244847fef98f9fa8c5fe1"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.085583 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wfcjp" event={"ID":"d8ad697f-6550-4e6a-80ab-e877a5d6e2cf","Type":"ContainerStarted","Data":"9ca8287e9c26ec0dfd3d73ca6519fc5e70777d2ad74263c7918c4242406f9b4e"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.086756 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-f9d7485db-95tmb" event={"ID":"49c456f9-6cbf-4e3c-992a-8636357253ad","Type":"ContainerStarted","Data":"ed1637877ab587a3769bbb73e160f3ac9a77b12181696670448b35273a1631bd"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.102712 4947 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vw66z container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.102760 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" podUID="37b7a00e-4def-4d1e-8333-94d15174223b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.139735 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:41 crc kubenswrapper[4947]: E0125 00:11:41.141717 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:41.641700774 +0000 UTC m=+140.874691214 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155707 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155750 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" event={"ID":"222d5540-6b86-404a-b787-ea6a6043206a","Type":"ContainerStarted","Data":"1f85c504916b079d2fb54d456f76353ad645c3fb25746f3cdf54373957e57ba0"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155768 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" event={"ID":"16240ac3-819b-4e68-bca9-c97c94599fbb","Type":"ContainerStarted","Data":"bbedd4034703687c9b50b1c5783748e27c2d0500cbc6f6a796b4ead9976ebd8d"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155787 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" event={"ID":"16240ac3-819b-4e68-bca9-c97c94599fbb","Type":"ContainerStarted","Data":"f5375ebf7f4d9710bc386ea637ae73d1ddd57c341da724adc88f86c20f1e629e"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155833 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" 
event={"ID":"7e230409-6e68-4f7c-b0c3-3e55433b22c1","Type":"ContainerStarted","Data":"60e2a9246a0a5fe4602732b036d749d63160d445a75779df6146a329c452ffa3"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155844 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5zvdg" event={"ID":"0e97ae5e-35ab-41e9-aa03-ad060bbbd676","Type":"ContainerStarted","Data":"f856a317cec43d04d6a06e385253dff11efde283f4574e6eae2741b027f1dd0e"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155854 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" event={"ID":"37b7a00e-4def-4d1e-8333-94d15174223b","Type":"ContainerStarted","Data":"7403745f150d63c1345cf7e91fe3a3905bd0f9a92d392d4fa7f206d8e146d177"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155864 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" event={"ID":"37b7a00e-4def-4d1e-8333-94d15174223b","Type":"ContainerStarted","Data":"f071273158cef43a1913f83b87e712c39d955ed385651512fb2fd76cd5e1e89d"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155892 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" event={"ID":"d3a733c1-a1cf-42ef-a056-27185292354f","Type":"ContainerStarted","Data":"9fddec1b4c50133c39595d5ae85373dbb93cca3db14bf1f44dacfede0073d88d"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155903 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5nscb" event={"ID":"2244349f-df5c-4813-a0e7-418a602f57b0","Type":"ContainerStarted","Data":"ef36734da29446e8d9b2f83db38b77267a16fdb6bd73f6217a93feeb69974ada"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155914 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d" event={"ID":"b50bda2b-e707-456e-af02-796b6d9a4cdf","Type":"ContainerStarted","Data":"f3f6cfce0cd0914f22fa73027204f70ae39cf141087dd5a03b2c06a180e391ce"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155925 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" event={"ID":"b8f2f610-05dc-49ea-882e-634d283b3caa","Type":"ContainerStarted","Data":"14485fd06b20211136c1bbbf3ca34f895cd859f8399c47a0300c020d2ee57a9b"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155934 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" event={"ID":"b8f2f610-05dc-49ea-882e-634d283b3caa","Type":"ContainerStarted","Data":"70796f6459b96427a71a3610eb2dfdfd1b263546ab86ca924c1f2e01cde24bc8"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.155943 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" event={"ID":"90a4381e-451b-4940-932a-efba1d101c81","Type":"ContainerStarted","Data":"8081867ca83d0d3dba7706ba48debc4ce6aad588973a48266a6d034a2ba0cc6d"} Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.240334 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:41 crc kubenswrapper[4947]: E0125 00:11:41.240999 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-25 00:11:41.740982902 +0000 UTC m=+140.973973342 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.342274 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:41 crc kubenswrapper[4947]: E0125 00:11:41.342842 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:41.842830588 +0000 UTC m=+141.075821028 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.451655 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:41 crc kubenswrapper[4947]: E0125 00:11:41.452042 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:41.952028137 +0000 UTC m=+141.185018577 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.553033 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:41 crc kubenswrapper[4947]: E0125 00:11:41.553327 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:42.053315628 +0000 UTC m=+141.286306068 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.636840 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-5nscb" podStartSLOduration=120.636826211 podStartE2EDuration="2m0.636826211s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:41.626789477 +0000 UTC m=+140.859779917" watchObservedRunningTime="2026-01-25 00:11:41.636826211 +0000 UTC m=+140.869816651" Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.640513 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-tq4ph" podStartSLOduration=120.640472016 podStartE2EDuration="2m0.640472016s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:41.636384589 +0000 UTC m=+140.869375029" watchObservedRunningTime="2026-01-25 00:11:41.640472016 +0000 UTC m=+140.873462486" Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.653619 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:41 crc kubenswrapper[4947]: E0125 00:11:41.654002 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:42.153983712 +0000 UTC m=+141.386974152 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.791649 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:41 crc kubenswrapper[4947]: E0125 00:11:41.792165 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:42.292148621 +0000 UTC m=+141.525139061 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.794077 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-nt9qq" podStartSLOduration=120.794059332 podStartE2EDuration="2m0.794059332s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:41.791398882 +0000 UTC m=+141.024389322" watchObservedRunningTime="2026-01-25 00:11:41.794059332 +0000 UTC m=+141.027049762" Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.816794 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v"] Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.840630 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" podStartSLOduration=119.840603054 podStartE2EDuration="1m59.840603054s" podCreationTimestamp="2026-01-25 00:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:41.835374996 +0000 UTC m=+141.068365436" watchObservedRunningTime="2026-01-25 00:11:41.840603054 +0000 UTC m=+141.073593494" Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.846017 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9"] Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.887864 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.893105 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:41 crc kubenswrapper[4947]: E0125 00:11:41.897398 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:42.397378746 +0000 UTC m=+141.630369176 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:41 crc kubenswrapper[4947]: I0125 00:11:41.897917 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.000567 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:42 crc kubenswrapper[4947]: E0125 00:11:42.014393 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:42.514372649 +0000 UTC m=+141.747363089 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.022756 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:42 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:42 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:42 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.022811 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.107416 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:42 crc kubenswrapper[4947]: E0125 00:11:42.107927 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-25 00:11:42.607887146 +0000 UTC m=+141.840877586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.108113 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:42 crc kubenswrapper[4947]: E0125 00:11:42.108638 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:42.608590024 +0000 UTC m=+141.841580464 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.138832 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" event={"ID":"60ffdf66-0472-4e1f-9ea6-869acc338d0e","Type":"ContainerStarted","Data":"285da055e2c500c80ae5fa04797fa960cb9225549accda7dd866c3858c4a310d"} Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.147354 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" event={"ID":"8cba929a-19da-479b-b9fb-b4cffaaba4c2","Type":"ContainerStarted","Data":"11142abbc4087cf0856125625243d08d9a1f8c11e5c6dfa765096871226454ab"} Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.179245 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" event={"ID":"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd","Type":"ContainerStarted","Data":"d25566e3fd8885032b5a026027533f468cd438c216fc1a989681c6c296759918"} Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.186218 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d" event={"ID":"b50bda2b-e707-456e-af02-796b6d9a4cdf","Type":"ContainerStarted","Data":"34c3bbc616f663d4b633400d99bde35312eef4d825a230ae1fe5a5ecc109aa10"} Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.199832 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" event={"ID":"b6a491f6-3829-4c9d-88cb-a49864576106","Type":"ContainerStarted","Data":"661eabb87f9473752ade357791459fc582b07355ece06fe48f097a1b6de947ee"} Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.210691 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:42 crc kubenswrapper[4947]: E0125 00:11:42.211018 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:42.711001435 +0000 UTC m=+141.943991875 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.227925 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-95tmb" event={"ID":"49c456f9-6cbf-4e3c-992a-8636357253ad","Type":"ContainerStarted","Data":"afaa14d9f4c5b6f7cfac73b5c2aa6efec5df81648ad324830ffab9f1d41d01af"} Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.248717 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-95tmb" podStartSLOduration=121.248676895 podStartE2EDuration="2m1.248676895s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:42.246947059 +0000 UTC m=+141.479937499" watchObservedRunningTime="2026-01-25 00:11:42.248676895 +0000 UTC m=+141.481684265" Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.263381 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" event={"ID":"b5f3c960-a56e-4c0e-82da-c8a39167eb8b","Type":"ContainerStarted","Data":"35a32c22c8b7368e9288b40b6674d6db578e16ad1144c4ef60f720f913592f67"} Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.270866 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.272693 4947 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" podStartSLOduration=121.272682715 podStartE2EDuration="2m1.272682715s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:42.271312749 +0000 UTC m=+141.504303199" watchObservedRunningTime="2026-01-25 00:11:42.272682715 +0000 UTC m=+141.505673155" Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.306750 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.317505 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:42 crc kubenswrapper[4947]: E0125 00:11:42.319459 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:42.819444373 +0000 UTC m=+142.052434813 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.326600 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.337979 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-xwjmr"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.343723 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.352426 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-pruner-29488320-jf979"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.370098 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.370157 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.407667 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-lz644"] Jan 25 00:11:42 crc kubenswrapper[4947]: W0125 00:11:42.411023 4947 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e4f53a6_fcc3_4310_965d_9a5dda91080b.slice/crio-3ad4af98bee02e97732bce66ef98812c3ca3ae679230f9721b2db8e405f6defc WatchSource:0}: Error finding container 3ad4af98bee02e97732bce66ef98812c3ca3ae679230f9721b2db8e405f6defc: Status 404 returned error can't find the container with id 3ad4af98bee02e97732bce66ef98812c3ca3ae679230f9721b2db8e405f6defc Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.412967 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.413979 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.417449 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.417493 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.418983 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:42 crc kubenswrapper[4947]: E0125 00:11:42.419270 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:42.919256585 +0000 UTC m=+142.152247025 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.429109 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-7kcc9"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.442280 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5nql4"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.480447 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vx9fn"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.484026 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.487950 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-5s2mh"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.506893 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.524185 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 
00:11:42 crc kubenswrapper[4947]: E0125 00:11:42.524613 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:43.024591453 +0000 UTC m=+142.257581893 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:42 crc kubenswrapper[4947]: W0125 00:11:42.583871 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ec7126b_b0f9_4fff_a11f_76726ce4c4ff.slice/crio-d9d3aef62d4d263918f652131fa6967d8214cecb1ec212c08034ef4732825bbb WatchSource:0}: Error finding container d9d3aef62d4d263918f652131fa6967d8214cecb1ec212c08034ef4732825bbb: Status 404 returned error can't find the container with id d9d3aef62d4d263918f652131fa6967d8214cecb1ec212c08034ef4732825bbb Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.625435 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:42 crc kubenswrapper[4947]: E0125 00:11:42.626028 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:43.126005287 +0000 UTC m=+142.358995727 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.655996 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.665396 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.680382 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-k7fhc"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.690991 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.695209 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pjjgh"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.726417 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:42 crc kubenswrapper[4947]: E0125 00:11:42.727149 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:43.227114013 +0000 UTC m=+142.460104513 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.736351 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qgqfk"] Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.828092 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:42 crc kubenswrapper[4947]: E0125 00:11:42.828547 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:43.328531788 +0000 UTC m=+142.561522228 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:42 crc kubenswrapper[4947]: W0125 00:11:42.837577 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod086f5a4b_235b_41a4_8bf6_75dd0626ba9e.slice/crio-4323424913b00ce8d99f2455bd14a6573c7eab964c062141654eeeb744599d56 WatchSource:0}: Error finding container 4323424913b00ce8d99f2455bd14a6573c7eab964c062141654eeeb744599d56: Status 404 returned error can't find the container with id 4323424913b00ce8d99f2455bd14a6573c7eab964c062141654eeeb744599d56 Jan 25 00:11:42 crc kubenswrapper[4947]: W0125 00:11:42.870629 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fc6ed9e_cdc4_4908_8cd6_c75a12d1f261.slice/crio-c368fb44ef17ba57b9d54859c7b301317ad8712c3f5aeac6a9ae3afc23e46c37 WatchSource:0}: Error finding container c368fb44ef17ba57b9d54859c7b301317ad8712c3f5aeac6a9ae3afc23e46c37: Status 404 returned error can't find the container with id c368fb44ef17ba57b9d54859c7b301317ad8712c3f5aeac6a9ae3afc23e46c37 Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.896861 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:42 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:42 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:42 
crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.896906 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:42 crc kubenswrapper[4947]: I0125 00:11:42.931117 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:42 crc kubenswrapper[4947]: E0125 00:11:42.931693 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:43.431671626 +0000 UTC m=+142.664662066 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.036865 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:43 crc kubenswrapper[4947]: E0125 00:11:43.037234 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:43.53721887 +0000 UTC m=+142.770209310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.138703 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:43 crc kubenswrapper[4947]: E0125 00:11:43.139462 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:43.639434935 +0000 UTC m=+142.872425375 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.240096 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:43 crc kubenswrapper[4947]: E0125 00:11:43.240839 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:43.740807238 +0000 UTC m=+142.973797678 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.342771 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" event={"ID":"90a4381e-451b-4940-932a-efba1d101c81","Type":"ContainerStarted","Data":"a36d33d7eff1fe5b9c921481f4aec9414cff6a93439985e41b2e91134f149afc"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.343532 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:43 crc kubenswrapper[4947]: E0125 00:11:43.343824 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:43.843814034 +0000 UTC m=+143.076804474 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.399245 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-5zvdg" event={"ID":"0e97ae5e-35ab-41e9-aa03-ad060bbbd676","Type":"ContainerStarted","Data":"817ff4963afa0de4cafefe1e9fd6d7e500d10fc898b7b9e0a7cbb3dfb9773581"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.400876 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-5zvdg" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.422482 4947 patch_prober.go:28] interesting pod/downloads-7954f5f757-5zvdg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.422525 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-5zvdg" podStartSLOduration=122.422510861 podStartE2EDuration="2m2.422510861s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:43.41867801 +0000 UTC m=+142.651668450" watchObservedRunningTime="2026-01-25 00:11:43.422510861 +0000 UTC m=+142.655501301" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.422542 4947 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-5zvdg" podUID="0e97ae5e-35ab-41e9-aa03-ad060bbbd676" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.444163 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:43 crc kubenswrapper[4947]: E0125 00:11:43.454407 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:43.954388409 +0000 UTC m=+143.187378849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.468318 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" event={"ID":"1e8f132b-916b-4973-9873-5919cb12251c","Type":"ContainerStarted","Data":"be7e342f3e6d6598d889734cd4187a95011471b87ee92c05cd69ab47fb74ee54"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.470323 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" event={"ID":"d3a733c1-a1cf-42ef-a056-27185292354f","Type":"ContainerStarted","Data":"e90e55f834d170a9f2751b73e12c28d7d7c3fcc4793df05b84ce36f32d19cba4"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.472163 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.499774 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" event={"ID":"9d15a018-8297-4e45-8a62-afa89a267381","Type":"ContainerStarted","Data":"f8461de37c95d2f68aceb775dc7fe3de76c76f12ffef68dfcf5d69ee41e8f1e4"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.501623 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" 
event={"ID":"c4619229-a3d7-401d-92d8-b1195e6e08f8","Type":"ContainerStarted","Data":"6f773596799deeb0c9b3c47a4dba49661e5faca82042d0d785f8b4d7e9bba2d2"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.504911 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.507496 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lz644" event={"ID":"c04cc1eb-ec23-4876-afd1-f123c04cdc8a","Type":"ContainerStarted","Data":"89aa9541b64237d49d9bc55b5e74e337ff03346683acd2bc16a98156f6d5fe57"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.512437 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" event={"ID":"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f","Type":"ContainerStarted","Data":"d3660274792703781a26567ec3825efc2393b9fdf7a7bfa82de2ed330954e1da"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.516732 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" event={"ID":"4e8662e0-1de8-4371-8836-214a0394675c","Type":"ContainerStarted","Data":"ce3ed48265c3712ee5171989380cd46457db282a272ad3e4b900c3141a69cfa4"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.521336 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" event={"ID":"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf","Type":"ContainerStarted","Data":"be500b11f2cd414cadc977f7f28f2d93ff6ad18aba9179114a7d7b0e2a2b6c30"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.530496 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" podStartSLOduration=122.530432376 podStartE2EDuration="2m2.530432376s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:43.506242441 +0000 UTC m=+142.739232901" watchObservedRunningTime="2026-01-25 00:11:43.530432376 +0000 UTC m=+142.763422816" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.530793 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" event={"ID":"fa35d682-53d6-4191-9cf9-f48b9f74e858","Type":"ContainerStarted","Data":"2742ff9aa7f3cbee3d8389c7f258cc4ce04fcb1e9943ebf713523dc12c66fb09"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.548675 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d" event={"ID":"b50bda2b-e707-456e-af02-796b6d9a4cdf","Type":"ContainerStarted","Data":"571d7c89f2ff8bfac00aa8998ff9571361fa2e5332cb8ec273da69ee149b8245"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.557556 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" podStartSLOduration=122.557539568 podStartE2EDuration="2m2.557539568s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:43.556320876 +0000 UTC m=+142.789311346" watchObservedRunningTime="2026-01-25 00:11:43.557539568 +0000 UTC m=+142.790530008" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.586247 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 
00:11:43 crc kubenswrapper[4947]: E0125 00:11:43.586991 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.086976501 +0000 UTC m=+143.319966941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.593580 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-zjf9d" podStartSLOduration=122.593565105 podStartE2EDuration="2m2.593565105s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:43.59145864 +0000 UTC m=+142.824449080" watchObservedRunningTime="2026-01-25 00:11:43.593565105 +0000 UTC m=+142.826555545" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.607423 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" event={"ID":"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8","Type":"ContainerStarted","Data":"b14687a0e3244999df330ccd307ba7acf30a5c1eeff21a579306144b475ba875"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.628674 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-wfcjp" 
event={"ID":"d8ad697f-6550-4e6a-80ab-e877a5d6e2cf","Type":"ContainerStarted","Data":"b5cba42e090732996c9ef825da7bbacbe888729641b141b7cb69f372f8556eb3"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.632928 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" event={"ID":"f441771b-d1ad-442b-b344-e321cd553fbc","Type":"ContainerStarted","Data":"3aa6854d99328d07f27b06bd8ddea80bd6297537355742f8b0e61d1c29092e6f"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.650309 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" event={"ID":"0fffe8f2-59b1-4215-809e-461bc8f5e386","Type":"ContainerStarted","Data":"acb1267a56efe44bfa2cb5992c2fcabc5a8bcbfcf18e59301eb07e360c1b2f9c"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.652971 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" event={"ID":"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd","Type":"ContainerStarted","Data":"37463940d19d8a83e4bd134f454542acb0204c8644b08b9ae011d12fbc46efcc"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.652995 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" event={"ID":"633ed9d6-a859-48e4-a100-f8e5bb3d6cfd","Type":"ContainerStarted","Data":"3212d7fa51d2ce609b2adee39bb69b36b2894ec6d8bc82315a710df74b3ff72b"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.695410 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:43 crc kubenswrapper[4947]: 
E0125 00:11:43.697268 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.197253629 +0000 UTC m=+143.430244069 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.747043 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" event={"ID":"ba74a9d5-0b44-4599-ac43-d117394771b0","Type":"ContainerStarted","Data":"9fa9b5f260a3ea017cbfafd5256d165def4e9e4a5e48b8e1a8146eede83ef7f3"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.747960 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.759109 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-wfcjp" podStartSLOduration=6.759093633 podStartE2EDuration="6.759093633s" podCreationTimestamp="2026-01-25 00:11:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:43.739314783 +0000 UTC m=+142.972305223" watchObservedRunningTime="2026-01-25 00:11:43.759093633 +0000 UTC m=+142.992084073" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 
00:11:43.806122 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:43 crc kubenswrapper[4947]: E0125 00:11:43.806403 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.306392376 +0000 UTC m=+143.539382816 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.833171 4947 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-546b8 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.833218 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" podUID="ba74a9d5-0b44-4599-ac43-d117394771b0" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Jan 25 
00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.833639 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-h6jgn" event={"ID":"b8f2f610-05dc-49ea-882e-634d283b3caa","Type":"ContainerStarted","Data":"fa173684fe2ea0de903aca7a66dcb95f0716e4d5290105ac3dc2d73458fe2028"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.835256 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-cms5v" podStartSLOduration=122.835238704 podStartE2EDuration="2m2.835238704s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:43.759489654 +0000 UTC m=+142.992480094" watchObservedRunningTime="2026-01-25 00:11:43.835238704 +0000 UTC m=+143.068229144" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.850351 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" event={"ID":"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff","Type":"ContainerStarted","Data":"d9d3aef62d4d263918f652131fa6967d8214cecb1ec212c08034ef4732825bbb"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.852696 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5s2mh" event={"ID":"5a4acfb5-2387-48ae-8c78-9d8ab4d96628","Type":"ContainerStarted","Data":"bd054f659c7a993f372cba8e90f34fe7069ef3f73f76798f32719317745eea54"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.856164 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" event={"ID":"222d5540-6b86-404a-b787-ea6a6043206a","Type":"ContainerStarted","Data":"4e18bb53f3e6e6047cbc3494806da0bab8e691cfa23ed21cd0af462e1ed9ad06"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.870172 
4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" event={"ID":"7e230409-6e68-4f7c-b0c3-3e55433b22c1","Type":"ContainerStarted","Data":"c212c05762707e9591a6f1a88f4b3d922776a03021787825cb91d5c7499a5fa9"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.870546 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" event={"ID":"7e230409-6e68-4f7c-b0c3-3e55433b22c1","Type":"ContainerStarted","Data":"b1a00459e0656c1b3f67c3ef541ca7ffe8700b8b1227a65cc882f2f402504a5c"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.871199 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.873618 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" event={"ID":"086f5a4b-235b-41a4-8bf6-75dd0626ba9e","Type":"ContainerStarted","Data":"4323424913b00ce8d99f2455bd14a6573c7eab964c062141654eeeb744599d56"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.874524 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" event={"ID":"b6a491f6-3829-4c9d-88cb-a49864576106","Type":"ContainerStarted","Data":"76abab8158db7b6e9d610264d5a7b58e562dbe5b03e63e0a9a327716a5f2eaff"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.875323 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.885024 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29488320-jf979" 
event={"ID":"9e4f53a6-fcc3-4310-965d-9a5dda91080b","Type":"ContainerStarted","Data":"3ad4af98bee02e97732bce66ef98812c3ca3ae679230f9721b2db8e405f6defc"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.889521 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" event={"ID":"0e8ad493-9466-46d8-8307-13f24463f184","Type":"ContainerStarted","Data":"a9e32a740fc4bcaaf5be83c4412435dfc74afa6b38c157a1c647ec866181930d"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.892593 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" event={"ID":"60ffdf66-0472-4e1f-9ea6-869acc338d0e","Type":"ContainerStarted","Data":"e504adc83ba76b0afa9e4c18829ab3f97bcd549a99543ac06da8bfb4ecf4306a"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.894187 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qgqfk" event={"ID":"8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261","Type":"ContainerStarted","Data":"c368fb44ef17ba57b9d54859c7b301317ad8712c3f5aeac6a9ae3afc23e46c37"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.894394 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:43 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:43 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:43 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.894425 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.897515 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" event={"ID":"79a96518-940a-4490-9067-9e2f873753f7","Type":"ContainerStarted","Data":"0a65a9919e2536dd4a6a79ecb90630098f9b2fc217ff19f7af2d3deecb3b3d47"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.899071 4947 generic.go:334] "Generic (PLEG): container finished" podID="8cba929a-19da-479b-b9fb-b4cffaaba4c2" containerID="bba35843ee0d47e58c55c48ef257f5ba722c99699e81c20da941c1ba367843a8" exitCode=0 Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.899149 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" event={"ID":"8cba929a-19da-479b-b9fb-b4cffaaba4c2","Type":"ContainerDied","Data":"bba35843ee0d47e58c55c48ef257f5ba722c99699e81c20da941c1ba367843a8"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.905032 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xwjmr" event={"ID":"4c613148-89dd-4904-b721-c90f6a0f89ba","Type":"ContainerStarted","Data":"ff924471f6e43e486cdf934200cc478d4e215254aa81e273cfafba9b5b6db46f"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.905101 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-xwjmr" event={"ID":"4c613148-89dd-4904-b721-c90f6a0f89ba","Type":"ContainerStarted","Data":"ddade357a51ab74504e3b9cf9bd47c6b00591cb9bf48a8e2213115c4f3485ab4"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.906717 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.906845 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:43 crc kubenswrapper[4947]: E0125 00:11:43.907996 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.407977115 +0000 UTC m=+143.640967555 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.908179 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:43 crc kubenswrapper[4947]: E0125 00:11:43.909559 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.409551776 +0000 UTC m=+143.642542216 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.911384 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" event={"ID":"eaa67d1d-92d7-41aa-b72f-aee9bca370fc","Type":"ContainerStarted","Data":"87d9340038e3992641e84348234a6387bb6972a3cd340dc91fd7ff3ea70d4c8d"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.911732 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" event={"ID":"eaa67d1d-92d7-41aa-b72f-aee9bca370fc","Type":"ContainerStarted","Data":"62512cdbb5578b70db3441fdea0f27b6ca3bfc9fc525fb22e4f43dd844e2c107"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.913829 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5" event={"ID":"f56c1338-08c8-47de-b24a-3aaf85e315f8","Type":"ContainerStarted","Data":"6fc89b77d4c3657ab517e644a87db8f5288c7317e1f50f648b903e80ac67446d"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.929394 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.930662 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" 
podStartSLOduration=122.930590338 podStartE2EDuration="2m2.930590338s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:43.927384444 +0000 UTC m=+143.160374884" watchObservedRunningTime="2026-01-25 00:11:43.930590338 +0000 UTC m=+143.163580778" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.937338 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw" event={"ID":"caf7d2fa-5195-4e91-b838-a33c9e281dc1","Type":"ContainerStarted","Data":"4574515e3c34916600fefdbf9e9ac6d941ebfb8b0d31c4be13694860ecfa3163"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.941737 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" event={"ID":"b5f3c960-a56e-4c0e-82da-c8a39167eb8b","Type":"ContainerStarted","Data":"a1ab096e1532dd00840229892616b23918284cf8f61930b73b483876680a8e52"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.946580 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" podStartSLOduration=122.946565258 podStartE2EDuration="2m2.946565258s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:43.946229509 +0000 UTC m=+143.179219949" watchObservedRunningTime="2026-01-25 00:11:43.946565258 +0000 UTC m=+143.179555708" Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.948084 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" 
event={"ID":"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a","Type":"ContainerStarted","Data":"9b602122214d3f4e660e27fdfc9bb5ede90dc621ecc3606f8ff397cf0c2605a7"} Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.992729 4947 patch_prober.go:28] interesting pod/console-operator-58897d9998-xwjmr container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body= Jan 25 00:11:43 crc kubenswrapper[4947]: I0125 00:11:43.992785 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xwjmr" podUID="4c613148-89dd-4904-b721-c90f6a0f89ba" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.017655 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.034667 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.534643592 +0000 UTC m=+143.767634032 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.038779 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-xwjmr" podStartSLOduration=123.03876612 podStartE2EDuration="2m3.03876612s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:43.994082976 +0000 UTC m=+143.227073436" watchObservedRunningTime="2026-01-25 00:11:44.03876612 +0000 UTC m=+143.271756560" Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.041054 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-spsw9" podStartSLOduration=123.04104586 podStartE2EDuration="2m3.04104586s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:44.037880876 +0000 UTC m=+143.270871316" watchObservedRunningTime="2026-01-25 00:11:44.04104586 +0000 UTC m=+143.274036300" Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.096653 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-cmngb" podStartSLOduration=123.09663881 podStartE2EDuration="2m3.09663881s" podCreationTimestamp="2026-01-25 
00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:44.060949543 +0000 UTC m=+143.293939983" watchObservedRunningTime="2026-01-25 00:11:44.09663881 +0000 UTC m=+143.329629250" Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.097400 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-2plqs" podStartSLOduration=123.0973956 podStartE2EDuration="2m3.0973956s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:44.09661977 +0000 UTC m=+143.329610210" watchObservedRunningTime="2026-01-25 00:11:44.0973956 +0000 UTC m=+143.330386050" Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.119789 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.123857 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.623842485 +0000 UTC m=+143.856832925 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.127708 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-nkrs8" podStartSLOduration=123.127695477 podStartE2EDuration="2m3.127695477s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:44.123174097 +0000 UTC m=+143.356164537" watchObservedRunningTime="2026-01-25 00:11:44.127695477 +0000 UTC m=+143.360685917" Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.181457 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-pruner-29488320-jf979" podStartSLOduration=123.181440748 podStartE2EDuration="2m3.181440748s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:44.18039172 +0000 UTC m=+143.413382160" watchObservedRunningTime="2026-01-25 00:11:44.181440748 +0000 UTC m=+143.414431188" Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.208610 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5" podStartSLOduration=123.208594741 podStartE2EDuration="2m3.208594741s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:44.206416684 +0000 UTC m=+143.439407124" watchObservedRunningTime="2026-01-25 00:11:44.208594741 +0000 UTC m=+143.441585181" Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.221461 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.221638 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.721615724 +0000 UTC m=+143.954606164 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.221773 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.222056 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.722044695 +0000 UTC m=+143.955035135 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.275602 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" podStartSLOduration=123.275587191 podStartE2EDuration="2m3.275587191s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:44.274653787 +0000 UTC m=+143.507644257" watchObservedRunningTime="2026-01-25 00:11:44.275587191 +0000 UTC m=+143.508577621" Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.323900 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-qxqhz" podStartSLOduration=123.32388721 podStartE2EDuration="2m3.32388721s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:44.323495609 +0000 UTC m=+143.556486049" watchObservedRunningTime="2026-01-25 00:11:44.32388721 +0000 UTC m=+143.556877650" Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.324010 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.324175 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.824163838 +0000 UTC m=+144.057154278 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.324373 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.324711 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.824697401 +0000 UTC m=+144.057687841 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.465906 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.466429 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.966398013 +0000 UTC m=+144.199388453 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.466553 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.467143 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:44.967043891 +0000 UTC m=+144.200034341 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.569135 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.569514 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:45.069491872 +0000 UTC m=+144.302482312 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.670247 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.670668 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:45.170649379 +0000 UTC m=+144.403639859 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.824265 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.824570 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:45.324540492 +0000 UTC m=+144.557530932 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.893094 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 25 00:11:44 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld
Jan 25 00:11:44 crc kubenswrapper[4947]: [+]process-running ok
Jan 25 00:11:44 crc kubenswrapper[4947]: healthz check failed
Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.893435 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 25 00:11:44 crc kubenswrapper[4947]: I0125 00:11:44.925041 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4"
Jan 25 00:11:44 crc kubenswrapper[4947]: E0125 00:11:44.925357 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:45.42534678 +0000 UTC m=+144.658337220 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.017664 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29488320-jf979" event={"ID":"9e4f53a6-fcc3-4310-965d-9a5dda91080b","Type":"ContainerStarted","Data":"e3aa396695721797a7d088335e8a4442951eaec070e1d3920c891d8532000ff2"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.018840 4947 generic.go:334] "Generic (PLEG): container finished" podID="0fffe8f2-59b1-4215-809e-461bc8f5e386" containerID="aa9247348877ce1c25c5c915bc010ef8fba4e16be3c739dabfa7d9b3bcf5a876" exitCode=0
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.018890 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" event={"ID":"0fffe8f2-59b1-4215-809e-461bc8f5e386","Type":"ContainerDied","Data":"aa9247348877ce1c25c5c915bc010ef8fba4e16be3c739dabfa7d9b3bcf5a876"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.070544 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 00:11:45 crc kubenswrapper[4947]: E0125 00:11:45.070916 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:45.570901884 +0000 UTC m=+144.803892324 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.074103 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw" event={"ID":"caf7d2fa-5195-4e91-b838-a33c9e281dc1","Type":"ContainerStarted","Data":"b61abd12836d6b78abe3f1d40f7ad73108ea8b99b095eb1ce7f6596272f36914"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.074152 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw" event={"ID":"caf7d2fa-5195-4e91-b838-a33c9e281dc1","Type":"ContainerStarted","Data":"651788d7435f86de6cc26474f3b0eb31cd556bcfbdf54071473f5ceff21506a8"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.080061 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" event={"ID":"ba74a9d5-0b44-4599-ac43-d117394771b0","Type":"ContainerStarted","Data":"df791de749873dfde1b96aec802b16cb72aad8b7fc655393f14361a7b9bd2a72"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.080115 4947 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-546b8 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body=
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.080167 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" podUID="ba74a9d5-0b44-4599-ac43-d117394771b0" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.081939 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" event={"ID":"fa35d682-53d6-4191-9cf9-f48b9f74e858","Type":"ContainerStarted","Data":"7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.082288 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.083080 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" event={"ID":"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf","Type":"ContainerStarted","Data":"2f718379bb37b4968bddb3d48c6e813e390b785a9b9b976ea9541f79813743aa"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.085197 4947 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vx9fn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body=
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.085247 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" podUID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.085540 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-vwtw5" event={"ID":"f56c1338-08c8-47de-b24a-3aaf85e315f8","Type":"ContainerStarted","Data":"8bb502f70bc9b5879a3ceae7bc4dc7390875c64887290b3961962c53a6ffaf4b"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.086854 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" event={"ID":"79a96518-940a-4490-9067-9e2f873753f7","Type":"ContainerStarted","Data":"ac3597a1e97c5f544f9e4fb7afbdc3988a5d15b936cf402d62a5f0bd81cae5ca"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.088140 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5s2mh" event={"ID":"5a4acfb5-2387-48ae-8c78-9d8ab4d96628","Type":"ContainerStarted","Data":"7a02a813ed0d1d3e00931180064bcf1feb864d256e13210300a1fba9418d0375"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.094773 4947 generic.go:334] "Generic (PLEG): container finished" podID="4e8662e0-1de8-4371-8836-214a0394675c" containerID="945b5d99ec8ef011f994efaa385a05ae30e88e52774ce03b7d0b40048bc3cd23" exitCode=0
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.137715 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" event={"ID":"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8","Type":"ContainerStarted","Data":"edad46cd3c992922d1e36be124fac6367ce23a82408755ef8bce0cf121846ba4"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.137760 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qgqfk" event={"ID":"8fc6ed9e-cdc4-4908-8cd6-c75a12d1f261","Type":"ContainerStarted","Data":"68a1188fe6718ac4f8d1fe08a6ea360e60a8e01a4c150f61299b7600035b1d3a"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.137791 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" event={"ID":"90a4381e-451b-4940-932a-efba1d101c81","Type":"ContainerStarted","Data":"e2fe7f469fd36f00c9973d1856e6125ee1eb8c60edeb2ed458cb99b3a0e56f75"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.137802 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" event={"ID":"4e8662e0-1de8-4371-8836-214a0394675c","Type":"ContainerDied","Data":"945b5d99ec8ef011f994efaa385a05ae30e88e52774ce03b7d0b40048bc3cd23"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.139023 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-cxmmw" podStartSLOduration=124.138988353 podStartE2EDuration="2m4.138988353s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:45.138404057 +0000 UTC m=+144.371394507" watchObservedRunningTime="2026-01-25 00:11:45.138988353 +0000 UTC m=+144.371978793"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.140912 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" event={"ID":"8cba929a-19da-479b-b9fb-b4cffaaba4c2","Type":"ContainerStarted","Data":"d0ffcab6aed733717da1105cb1f5f2acc696e7577c9ce55affa1ead7d8bae0b8"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.141364 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.145394 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" event={"ID":"9d15a018-8297-4e45-8a62-afa89a267381","Type":"ContainerStarted","Data":"b8157486549c7153c6f440e775308e1931565402eac4d25b57d51cfdfab72be2"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.145933 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.146917 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" event={"ID":"0e8ad493-9466-46d8-8307-13f24463f184","Type":"ContainerStarted","Data":"82de40cdc7258ef4a95938968408c1821cc0386aba28e43807f2874eedec9e0f"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.148576 4947 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-5nql4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.148607 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" podUID="9d15a018-8297-4e45-8a62-afa89a267381" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.234715 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-lz644" event={"ID":"c04cc1eb-ec23-4876-afd1-f123c04cdc8a","Type":"ContainerStarted","Data":"9becd27dd782903e264ac8e0df2d7d45c200203a9c88f7c2640705bb3f570178"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.236001 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4"
Jan 25 00:11:45 crc kubenswrapper[4947]: E0125 00:11:45.236429 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:45.736415752 +0000 UTC m=+144.969406192 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.256500 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s2g88" podStartSLOduration=124.256112129 podStartE2EDuration="2m4.256112129s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:45.252091784 +0000 UTC m=+144.485082224" watchObservedRunningTime="2026-01-25 00:11:45.256112129 +0000 UTC m=+144.489102569"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.307666 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" event={"ID":"f441771b-d1ad-442b-b344-e321cd553fbc","Type":"ContainerStarted","Data":"43a3ec4eee507eddf68b8c231e7c2937e28aab82a5f76b60b422a597277e0f37"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.337331 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" event={"ID":"4ec7126b-b0f9-4fff-a11f-76726ce4c4ff","Type":"ContainerStarted","Data":"cf923c5bc1cc75a7b7770478388250fd8d944575a71819e41d4943e522c67e56"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.341142 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.345159 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 00:11:45 crc kubenswrapper[4947]: E0125 00:11:45.345825 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:45.845809836 +0000 UTC m=+145.078800276 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.369478 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" event={"ID":"17242dc8-e334-406d-ad0a-5dc9ecdf0d6a","Type":"ContainerStarted","Data":"2b986c9539b3428403b5cab5764473840d134a9afebfa773432083772ba7846f"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.370914 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.371258 4947 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bg9x9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused" start-of-body=
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.371305 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" podUID="17242dc8-e334-406d-ad0a-5dc9ecdf0d6a" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.32:5443/healthz\": dial tcp 10.217.0.32:5443: connect: connection refused"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.382582 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" event={"ID":"c4619229-a3d7-401d-92d8-b1195e6e08f8","Type":"ContainerStarted","Data":"8662d31844c88f4f7cb8129058f27d8b5eb8e6f9cec8ed2ed8df612ca9afb9ab"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.412356 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" event={"ID":"086f5a4b-235b-41a4-8bf6-75dd0626ba9e","Type":"ContainerStarted","Data":"aacc8bbda00c8a5c90dd93db7018c32875ccf0f035ecaa48f367ac8563c58e37"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.418254 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" event={"ID":"1e8f132b-916b-4973-9873-5919cb12251c","Type":"ContainerStarted","Data":"d8db0765e6225bb7e0e3574a4b97b7b8717c519025624235f7bbd860a4c4aaa2"}
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.420365 4947 patch_prober.go:28] interesting pod/downloads-7954f5f757-5zvdg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body=
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.420400 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5zvdg" podUID="0e97ae5e-35ab-41e9-aa03-ad060bbbd676" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.420449 4947 patch_prober.go:28] interesting pod/console-operator-58897d9998-xwjmr container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused" start-of-body=
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.420463 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-xwjmr" podUID="4c613148-89dd-4904-b721-c90f6a0f89ba" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.15:8443/readyz\": dial tcp 10.217.0.15:8443: connect: connection refused"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.449042 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4"
Jan 25 00:11:45 crc kubenswrapper[4947]: E0125 00:11:45.449383 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:45.949371397 +0000 UTC m=+145.182361837 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.565272 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 00:11:45 crc kubenswrapper[4947]: E0125 00:11:45.565475 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:46.065435955 +0000 UTC m=+145.298426425 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.566013 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4"
Jan 25 00:11:45 crc kubenswrapper[4947]: E0125 00:11:45.570411 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:46.070396876 +0000 UTC m=+145.303387316 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.667621 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 00:11:45 crc kubenswrapper[4947]: E0125 00:11:45.668014 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:46.16799924 +0000 UTC m=+145.400989680 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.696040 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-qgqfk" podStartSLOduration=9.696023176 podStartE2EDuration="9.696023176s" podCreationTimestamp="2026-01-25 00:11:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:45.407151087 +0000 UTC m=+144.640141537" watchObservedRunningTime="2026-01-25 00:11:45.696023176 +0000 UTC m=+144.929013616"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.696544 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" podStartSLOduration=124.696537559 podStartE2EDuration="2m4.696537559s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:45.694213708 +0000 UTC m=+144.927204148" watchObservedRunningTime="2026-01-25 00:11:45.696537559 +0000 UTC m=+144.929527999"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.769627 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4"
Jan 25 00:11:45 crc kubenswrapper[4947]: E0125 00:11:45.770003 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:46.269987988 +0000 UTC m=+145.502978428 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.845667 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-glppw" podStartSLOduration=124.845631656 podStartE2EDuration="2m4.845631656s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:45.767777061 +0000 UTC m=+145.000767501" watchObservedRunningTime="2026-01-25 00:11:45.845631656 +0000 UTC m=+145.078622096"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.878703 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 00:11:45 crc kubenswrapper[4947]: E0125 00:11:45.879030 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:46.379012322 +0000 UTC m=+145.612002762 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.953658 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 25 00:11:45 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld
Jan 25 00:11:45 crc kubenswrapper[4947]: [+]process-running ok
Jan 25 00:11:45 crc kubenswrapper[4947]: healthz check failed
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.953721 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.954371 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-2prvv" podStartSLOduration=124.954360162 podStartE2EDuration="2m4.954360162s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:45.848978724 +0000 UTC m=+145.081969164" watchObservedRunningTime="2026-01-25 00:11:45.954360162 +0000 UTC m=+145.187350602"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.955787 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" podStartSLOduration=124.955781089 podStartE2EDuration="2m4.955781089s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:45.954606798 +0000 UTC m=+145.187597238" watchObservedRunningTime="2026-01-25 00:11:45.955781089 +0000 UTC m=+145.188771529"
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.980481 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4"
Jan 25 00:11:45 crc kubenswrapper[4947]: E0125 00:11:45.980990 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:46.480975871 +0000 UTC m=+145.713966311 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:45 crc kubenswrapper[4947]: I0125 00:11:45.982011 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" podStartSLOduration=124.981981888 podStartE2EDuration="2m4.981981888s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:45.981188396 +0000 UTC m=+145.214178846" watchObservedRunningTime="2026-01-25 00:11:45.981981888 +0000 UTC m=+145.214972328"
Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.046346 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-4qcsn" podStartSLOduration=125.046322728 podStartE2EDuration="2m5.046322728s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:46.045066745 +0000 UTC m=+145.278057185" watchObservedRunningTime="2026-01-25 00:11:46.046322728 +0000 UTC m=+145.279313158"
Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.048081 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-lz644" podStartSLOduration=124.048075224 podStartE2EDuration="2m4.048075224s" podCreationTimestamp="2026-01-25 00:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:46.00604869 +0000 UTC m=+145.239039140" watchObservedRunningTime="2026-01-25 00:11:46.048075224 +0000 UTC m=+145.281065664"
Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.080233 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-n4bqp" podStartSLOduration=124.080196318 podStartE2EDuration="2m4.080196318s" podCreationTimestamp="2026-01-25 00:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:46.077786544 +0000 UTC m=+145.310776974" watchObservedRunningTime="2026-01-25 00:11:46.080196318 +0000 UTC m=+145.313186758"
Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.082472 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 00:11:46 crc kubenswrapper[4947]: E0125 00:11:46.082859 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:46.582843018 +0000 UTC m=+145.815833458 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.184324 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4"
Jan 25 00:11:46 crc kubenswrapper[4947]: E0125 00:11:46.184718 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:46.684696903 +0000 UTC m=+145.917687333 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.285909 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 25 00:11:46 crc kubenswrapper[4947]: E0125 00:11:46.286093 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:46.786066606 +0000 UTC m=+146.019057046 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.286439 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:46 crc kubenswrapper[4947]: E0125 00:11:46.286720 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:46.786707143 +0000 UTC m=+146.019697583 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.387845 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:46 crc kubenswrapper[4947]: E0125 00:11:46.394430 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:46.894395661 +0000 UTC m=+146.127386111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.425280 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" event={"ID":"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f","Type":"ContainerStarted","Data":"b3e5f53453dab5d4e2303e5b8afc3fd1a7832810cdd6e0b7d62a50f9d46fb231"} Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.427358 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" event={"ID":"79a96518-940a-4490-9067-9e2f873753f7","Type":"ContainerStarted","Data":"db38cb4a4d7183b227730f1217ae7ffc2f561b4b9d1c67d6a18c0122b7de4d0f"} Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.430246 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-5s2mh" event={"ID":"5a4acfb5-2387-48ae-8c78-9d8ab4d96628","Type":"ContainerStarted","Data":"cc73d351af1970d9f2f3aaa1b0265e0ae023d438f6539cd7b47be1a1e865c301"} Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.430267 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.431518 4947 generic.go:334] "Generic (PLEG): container finished" podID="c4619229-a3d7-401d-92d8-b1195e6e08f8" containerID="8662d31844c88f4f7cb8129058f27d8b5eb8e6f9cec8ed2ed8df612ca9afb9ab" exitCode=0 Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.431557 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" event={"ID":"c4619229-a3d7-401d-92d8-b1195e6e08f8","Type":"ContainerDied","Data":"8662d31844c88f4f7cb8129058f27d8b5eb8e6f9cec8ed2ed8df612ca9afb9ab"} Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.433080 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" event={"ID":"086f5a4b-235b-41a4-8bf6-75dd0626ba9e","Type":"ContainerStarted","Data":"6612a04375af55a59f2ab05a3064359f1f2ab103103b4b144af242341c24090c"} Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.434914 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" event={"ID":"0fffe8f2-59b1-4215-809e-461bc8f5e386","Type":"ContainerStarted","Data":"48c249a7f0d3d14dedba9e40b40a2be4bb54267016202c6a467dd71e75d7d604"} Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.437293 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" event={"ID":"8eb68099-5a2e-4ee2-9ab7-eb6a9aae4bdf","Type":"ContainerStarted","Data":"bf53b13a34756f4dc417b424e2b53e4ad8b307158e97faabf42d675e77081c21"} Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.439359 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" event={"ID":"d41f4b03-bff1-4ba1-a54b-b3e78014ecb8","Type":"ContainerStarted","Data":"28ca4e5473cf669d0f64248c6f52e4fd6ae63e28bacca4cb2c7166bafdb821dd"} Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.441961 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" event={"ID":"4e8662e0-1de8-4371-8836-214a0394675c","Type":"ContainerStarted","Data":"fc95af35ade002bdea64007c152d00e59af41ddbbe819254e011dcc2cf2862a3"} Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.441987 4947 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" event={"ID":"4e8662e0-1de8-4371-8836-214a0394675c","Type":"ContainerStarted","Data":"efbb70e0a95a9a51cdf70bdea04cddec4723e28949bb8e6e11d15406c673058d"} Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.443452 4947 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vx9fn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.443533 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" podUID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.449680 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-546b8" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.463556 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.464385 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-25dw7" podStartSLOduration=125.46436819 podStartE2EDuration="2m5.46436819s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:46.462955243 +0000 UTC m=+145.695945683" watchObservedRunningTime="2026-01-25 00:11:46.46436819 +0000 UTC 
m=+145.697358630" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.517023 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:46 crc kubenswrapper[4947]: E0125 00:11:46.517337 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.017326221 +0000 UTC m=+146.250316661 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.517875 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" podStartSLOduration=125.517858626 podStartE2EDuration="2m5.517858626s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:46.516313485 +0000 UTC m=+145.749303925" watchObservedRunningTime="2026-01-25 00:11:46.517858626 +0000 UTC m=+145.750849066" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.569508 4947 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-tsj7s" podStartSLOduration=125.569487792 podStartE2EDuration="2m5.569487792s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:46.568493416 +0000 UTC m=+145.801483876" watchObservedRunningTime="2026-01-25 00:11:46.569487792 +0000 UTC m=+145.802478232" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.618408 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:46 crc kubenswrapper[4947]: E0125 00:11:46.619828 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.119808633 +0000 UTC m=+146.352799073 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.633174 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-tz9k4" podStartSLOduration=125.633156634 podStartE2EDuration="2m5.633156634s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:46.622557605 +0000 UTC m=+145.855548045" watchObservedRunningTime="2026-01-25 00:11:46.633156634 +0000 UTC m=+145.866147074" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.720805 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:46 crc kubenswrapper[4947]: E0125 00:11:46.721140 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.221109135 +0000 UTC m=+146.454099575 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.750325 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-k7fhc" podStartSLOduration=125.750305072 podStartE2EDuration="2m5.750305072s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:46.715238611 +0000 UTC m=+145.948229051" watchObservedRunningTime="2026-01-25 00:11:46.750305072 +0000 UTC m=+145.983295512" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.786404 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" podStartSLOduration=124.786388569 podStartE2EDuration="2m4.786388569s" podCreationTimestamp="2026-01-25 00:09:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:46.78259364 +0000 UTC m=+146.015584080" watchObservedRunningTime="2026-01-25 00:11:46.786388569 +0000 UTC m=+146.019379009" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.821536 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:46 crc kubenswrapper[4947]: E0125 00:11:46.822682 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.322660872 +0000 UTC m=+146.555651312 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.900424 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-5s2mh" podStartSLOduration=10.900407305 podStartE2EDuration="10.900407305s" podCreationTimestamp="2026-01-25 00:11:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:46.89946016 +0000 UTC m=+146.132450590" watchObservedRunningTime="2026-01-25 00:11:46.900407305 +0000 UTC m=+146.133397745" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.901913 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:46 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:46 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:46 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:46 crc 
kubenswrapper[4947]: I0125 00:11:46.901966 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.929440 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:46 crc kubenswrapper[4947]: E0125 00:11:46.929682 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.429669024 +0000 UTC m=+146.662659454 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:46 crc kubenswrapper[4947]: I0125 00:11:46.949257 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bg9x9" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.019452 4947 csr.go:261] certificate signing request csr-6czf7 is approved, waiting to be issued Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.030380 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.030700 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.530686867 +0000 UTC m=+146.763677297 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.036259 4947 csr.go:257] certificate signing request csr-6czf7 is issued Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.075288 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.075346 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.138417 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.138852 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.638829458 +0000 UTC m=+146.871819888 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.241756 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.242041 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.7420261 +0000 UTC m=+146.975016540 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.342759 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.343156 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.843143236 +0000 UTC m=+147.076133666 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.443868 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.444042 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.944020976 +0000 UTC m=+147.177011416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.444108 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.444449 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:47.944436236 +0000 UTC m=+147.177426676 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.454780 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" event={"ID":"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f","Type":"ContainerStarted","Data":"7a98a66fc45b9dc850d7ca4447dc28d96aff763e55d1f5cce1c1d7ab98ea1f65"} Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.454858 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" event={"ID":"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f","Type":"ContainerStarted","Data":"90114e1dd2eafccc01a2cae1ef4816eadf2f35dee5c4f27fe807d0c493c3c761"} Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.456698 4947 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vx9fn container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.456745 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" podUID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.546051 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.547341 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:48.04732764 +0000 UTC m=+147.280318080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.646491 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-5dh8t" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.647873 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.648293 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2026-01-25 00:11:48.148280802 +0000 UTC m=+147.381271242 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.748569 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.748780 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:48.248736012 +0000 UTC m=+147.481726452 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.749236 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.749566 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:48.249550982 +0000 UTC m=+147.482541422 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.850863 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.851169 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:48.35108556 +0000 UTC m=+147.584076010 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.851407 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.851765 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:48.351750647 +0000 UTC m=+147.584741087 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.910637 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:47 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:47 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:47 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.910984 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.936619 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qzj76"] Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.937511 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.939589 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.952825 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.953208 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:48.453175072 +0000 UTC m=+147.686165512 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.954008 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxzk6\" (UniqueName: \"kubernetes.io/projected/900aeb01-050c-45b8-936c-e5f8d73ebeb5-kube-api-access-vxzk6\") pod \"certified-operators-qzj76\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.954191 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.954277 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.954545 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-utilities\") pod \"certified-operators-qzj76\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.954614 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:48.45460254 +0000 UTC m=+147.687593180 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.954284 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.954769 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.955002 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-catalog-content\") pod \"certified-operators-qzj76\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.955146 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.955263 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.962796 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.974806 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.974995 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.979173 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-47m2l"] Jan 25 00:11:47 crc kubenswrapper[4947]: E0125 00:11:47.979391 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4619229-a3d7-401d-92d8-b1195e6e08f8" containerName="collect-profiles" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.979411 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4619229-a3d7-401d-92d8-b1195e6e08f8" containerName="collect-profiles" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.979514 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4619229-a3d7-401d-92d8-b1195e6e08f8" containerName="collect-profiles" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.980327 
4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:11:47 crc kubenswrapper[4947]: I0125 00:11:47.982673 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.013981 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.016806 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.031259 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nmrkd"] Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.032138 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.040197 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-25 00:06:47 +0000 UTC, rotation deadline is 2026-10-22 11:45:25.170161227 +0000 UTC Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.040232 4947 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6491h33m37.129931868s for next certificate rotation Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059224 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4619229-a3d7-401d-92d8-b1195e6e08f8-config-volume\") pod \"c4619229-a3d7-401d-92d8-b1195e6e08f8\" (UID: \"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059462 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059501 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4619229-a3d7-401d-92d8-b1195e6e08f8-secret-volume\") pod \"c4619229-a3d7-401d-92d8-b1195e6e08f8\" (UID: \"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059548 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nm6h\" (UniqueName: \"kubernetes.io/projected/c4619229-a3d7-401d-92d8-b1195e6e08f8-kube-api-access-5nm6h\") pod \"c4619229-a3d7-401d-92d8-b1195e6e08f8\" (UID: \"c4619229-a3d7-401d-92d8-b1195e6e08f8\") " Jan 25 00:11:48 crc 
kubenswrapper[4947]: I0125 00:11:48.059719 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxzk6\" (UniqueName: \"kubernetes.io/projected/900aeb01-050c-45b8-936c-e5f8d73ebeb5-kube-api-access-vxzk6\") pod \"certified-operators-qzj76\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059762 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-catalog-content\") pod \"certified-operators-nmrkd\" (UID: \"8631ec11-9ab2-4799-b57c-0a346ec69767\") " pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059802 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhdwv\" (UniqueName: \"kubernetes.io/projected/ad96bcad-395b-4844-9992-00acdf7436c2-kube-api-access-fhdwv\") pod \"community-operators-47m2l\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059821 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-utilities\") pod \"certified-operators-qzj76\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059843 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-catalog-content\") pod \"community-operators-47m2l\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " 
pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059864 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfhzb\" (UniqueName: \"kubernetes.io/projected/8631ec11-9ab2-4799-b57c-0a346ec69767-kube-api-access-tfhzb\") pod \"certified-operators-nmrkd\" (UID: \"8631ec11-9ab2-4799-b57c-0a346ec69767\") " pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059884 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-catalog-content\") pod \"certified-operators-qzj76\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059899 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-utilities\") pod \"certified-operators-nmrkd\" (UID: \"8631ec11-9ab2-4799-b57c-0a346ec69767\") " pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.059914 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-utilities\") pod \"community-operators-47m2l\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:11:48 crc kubenswrapper[4947]: E0125 00:11:48.060030 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-25 00:11:48.560012739 +0000 UTC m=+147.793003179 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.060227 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4619229-a3d7-401d-92d8-b1195e6e08f8-config-volume" (OuterVolumeSpecName: "config-volume") pod "c4619229-a3d7-401d-92d8-b1195e6e08f8" (UID: "c4619229-a3d7-401d-92d8-b1195e6e08f8"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.060696 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-utilities\") pod \"certified-operators-qzj76\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.062712 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-catalog-content\") pod \"certified-operators-qzj76\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.062849 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4619229-a3d7-401d-92d8-b1195e6e08f8-secret-volume" 
(OuterVolumeSpecName: "secret-volume") pod "c4619229-a3d7-401d-92d8-b1195e6e08f8" (UID: "c4619229-a3d7-401d-92d8-b1195e6e08f8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.065397 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4619229-a3d7-401d-92d8-b1195e6e08f8-kube-api-access-5nm6h" (OuterVolumeSpecName: "kube-api-access-5nm6h") pod "c4619229-a3d7-401d-92d8-b1195e6e08f8" (UID: "c4619229-a3d7-401d-92d8-b1195e6e08f8"). InnerVolumeSpecName "kube-api-access-5nm6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.083250 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nmrkd"] Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.103036 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qzj76"] Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.122614 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-47m2l"] Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.163484 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxzk6\" (UniqueName: \"kubernetes.io/projected/900aeb01-050c-45b8-936c-e5f8d73ebeb5-kube-api-access-vxzk6\") pod \"certified-operators-qzj76\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.163873 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.164020 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-catalog-content\") pod \"certified-operators-nmrkd\" (UID: \"8631ec11-9ab2-4799-b57c-0a346ec69767\") " pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.164186 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhdwv\" (UniqueName: \"kubernetes.io/projected/ad96bcad-395b-4844-9992-00acdf7436c2-kube-api-access-fhdwv\") pod \"community-operators-47m2l\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.164298 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-catalog-content\") pod \"community-operators-47m2l\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.164403 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfhzb\" (UniqueName: \"kubernetes.io/projected/8631ec11-9ab2-4799-b57c-0a346ec69767-kube-api-access-tfhzb\") pod \"certified-operators-nmrkd\" (UID: \"8631ec11-9ab2-4799-b57c-0a346ec69767\") " pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.164504 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-utilities\") pod \"certified-operators-nmrkd\" (UID: 
\"8631ec11-9ab2-4799-b57c-0a346ec69767\") " pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.164606 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-utilities\") pod \"community-operators-47m2l\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.164789 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nm6h\" (UniqueName: \"kubernetes.io/projected/c4619229-a3d7-401d-92d8-b1195e6e08f8-kube-api-access-5nm6h\") on node \"crc\" DevicePath \"\"" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.164877 4947 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c4619229-a3d7-401d-92d8-b1195e6e08f8-config-volume\") on node \"crc\" DevicePath \"\"" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.164957 4947 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c4619229-a3d7-401d-92d8-b1195e6e08f8-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.164852 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-catalog-content\") pod \"community-operators-47m2l\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.165178 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-catalog-content\") pod \"certified-operators-nmrkd\" (UID: 
\"8631ec11-9ab2-4799-b57c-0a346ec69767\") " pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:11:48 crc kubenswrapper[4947]: E0125 00:11:48.165436 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:48.665416607 +0000 UTC m=+147.898407147 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.165765 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-utilities\") pod \"certified-operators-nmrkd\" (UID: \"8631ec11-9ab2-4799-b57c-0a346ec69767\") " pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.165953 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-utilities\") pod \"community-operators-47m2l\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.217842 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfhzb\" (UniqueName: \"kubernetes.io/projected/8631ec11-9ab2-4799-b57c-0a346ec69767-kube-api-access-tfhzb\") pod \"certified-operators-nmrkd\" (UID: 
\"8631ec11-9ab2-4799-b57c-0a346ec69767\") " pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.221307 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhdwv\" (UniqueName: \"kubernetes.io/projected/ad96bcad-395b-4844-9992-00acdf7436c2-kube-api-access-fhdwv\") pod \"community-operators-47m2l\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.235732 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4vzx6"] Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.240860 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.245661 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.273373 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:48 crc kubenswrapper[4947]: E0125 00:11:48.273769 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-25 00:11:48.773743114 +0000 UTC m=+148.006733544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.274345 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-utilities\") pod \"community-operators-4vzx6\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.274479 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-catalog-content\") pod \"community-operators-4vzx6\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.274655 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.274740 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lszc2\" (UniqueName: \"kubernetes.io/projected/4fbe2fc7-f0a5-439c-988c-d034d3da6add-kube-api-access-lszc2\") pod \"community-operators-4vzx6\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:11:48 crc kubenswrapper[4947]: E0125 00:11:48.275302 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:48.775284614 +0000 UTC m=+148.008275054 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.285643 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4vzx6"] Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.327458 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.403623 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.405138 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.405492 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-catalog-content\") pod \"community-operators-4vzx6\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.405554 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lszc2\" (UniqueName: \"kubernetes.io/projected/4fbe2fc7-f0a5-439c-988c-d034d3da6add-kube-api-access-lszc2\") pod \"community-operators-4vzx6\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.405600 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-utilities\") pod \"community-operators-4vzx6\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.405946 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-utilities\") pod \"community-operators-4vzx6\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " 
pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:11:48 crc kubenswrapper[4947]: E0125 00:11:48.406027 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:48.906012968 +0000 UTC m=+148.139003408 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.406259 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-catalog-content\") pod \"community-operators-4vzx6\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.412822 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.412982 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.435042 4947 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.457786 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lszc2\" (UniqueName: \"kubernetes.io/projected/4fbe2fc7-f0a5-439c-988c-d034d3da6add-kube-api-access-lszc2\") pod \"community-operators-4vzx6\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.507531 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:48 crc kubenswrapper[4947]: E0125 00:11:48.507831 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:49.007820803 +0000 UTC m=+148.240811243 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.512577 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" event={"ID":"c4619229-a3d7-401d-92d8-b1195e6e08f8","Type":"ContainerDied","Data":"6f773596799deeb0c9b3c47a4dba49661e5faca82042d0d785f8b4d7e9bba2d2"} Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.512623 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f773596799deeb0c9b3c47a4dba49661e5faca82042d0d785f8b4d7e9bba2d2" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.512717 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488320-2ns8x" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.542748 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.543370 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.546734 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.547047 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.590929 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.593298 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" event={"ID":"4ca81aaf-e9c9-4f8c-9426-3e8ffa8c861f","Type":"ContainerStarted","Data":"25a5d9d4aab1073fe916b242776f45defb947f1afcfe4589e770972c6821c5e8"} Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.609105 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:48 crc kubenswrapper[4947]: E0125 00:11:48.609803 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:49.109784492 +0000 UTC m=+148.342774932 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.695094 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.715349 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.715618 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67dda4f2-8e20-4173-8789-a53030fa141f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"67dda4f2-8e20-4173-8789-a53030fa141f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.715640 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67dda4f2-8e20-4173-8789-a53030fa141f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"67dda4f2-8e20-4173-8789-a53030fa141f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 00:11:48 crc kubenswrapper[4947]: E0125 00:11:48.717045 4947 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:49.217029938 +0000 UTC m=+148.450020378 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.816245 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.816513 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67dda4f2-8e20-4173-8789-a53030fa141f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"67dda4f2-8e20-4173-8789-a53030fa141f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.816540 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67dda4f2-8e20-4173-8789-a53030fa141f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"67dda4f2-8e20-4173-8789-a53030fa141f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 00:11:48 crc kubenswrapper[4947]: E0125 
00:11:48.816874 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:49.316859872 +0000 UTC m=+148.549850312 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.816903 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67dda4f2-8e20-4173-8789-a53030fa141f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"67dda4f2-8e20-4173-8789-a53030fa141f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.887642 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.899028 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67dda4f2-8e20-4173-8789-a53030fa141f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"67dda4f2-8e20-4173-8789-a53030fa141f\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.914747 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-pjjgh" podStartSLOduration=12.914729283 
podStartE2EDuration="12.914729283s" podCreationTimestamp="2026-01-25 00:11:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:48.750244451 +0000 UTC m=+147.983234901" watchObservedRunningTime="2026-01-25 00:11:48.914729283 +0000 UTC m=+148.147719723" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.918625 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:48 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:48 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:48 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.918671 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.926419 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:48 crc kubenswrapper[4947]: E0125 00:11:48.926717 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:49.426706057 +0000 UTC m=+148.659696497 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.935596 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.935847 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.936881 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.979908 4947 patch_prober.go:28] interesting pod/console-f9d7485db-95tmb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 25 00:11:48 crc kubenswrapper[4947]: I0125 00:11:48.980484 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-95tmb" podUID="49c456f9-6cbf-4e3c-992a-8636357253ad" containerName="console" probeResult="failure" output="Get \"https://10.217.0.20:8443/health\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.032871 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:49 crc kubenswrapper[4947]: E0125 00:11:49.033683 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:49.533647556 +0000 UTC m=+148.766637996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.034197 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:49 crc kubenswrapper[4947]: E0125 00:11:49.034488 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:49.534478618 +0000 UTC m=+148.767469058 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.135237 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:49 crc kubenswrapper[4947]: E0125 00:11:49.135628 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:49.635613955 +0000 UTC m=+148.868604395 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.165859 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-xwjmr" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.240452 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:49 crc kubenswrapper[4947]: E0125 00:11:49.240820 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:49.740806128 +0000 UTC m=+148.973796568 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.285391 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.286537 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.324384 4947 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-25T00:11:48.435062272Z","Handler":null,"Name":""} Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.342661 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:49 crc kubenswrapper[4947]: E0125 00:11:49.343205 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:49.843119076 +0000 UTC m=+149.076109516 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.443995 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:49 crc kubenswrapper[4947]: E0125 00:11:49.446277 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:49.946266286 +0000 UTC m=+149.179256726 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.548688 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:49 crc kubenswrapper[4947]: E0125 00:11:49.548983 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-25 00:11:50.048968433 +0000 UTC m=+149.281958863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.601915 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.601945 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.616599 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"dbf1d6f17fcf81edfc092beda07ad7555a6b9c7867b661dbbb51c1bf80ff4748"} Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.628303 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f2e8c896ba14b65bea1959fd101dfe34efe55d0fc583761612aee0d6fd62d57e"} Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.628350 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0a61da00d3ed8c9ab02df71450413018e03b1686d4cb6c823f20ee2679791124"} Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.629160 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.637309 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.652391 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f3ad3dc1fc24d0db3232a872cd25d7ea5326010f4237f0c8c301b532155a9656"} Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.652437 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"a71e0ad5256a9df5103cd71cfe9af5e1092f6997457b4bb8ceef0bcf2424bd32"} Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.653825 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:49 crc kubenswrapper[4947]: E0125 00:11:49.654149 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-25 00:11:50.154110876 +0000 UTC m=+149.387101316 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mprs4" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.730137 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qzj76"] Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.741554 4947 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.741597 4947 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.756483 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.775507 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.788570 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-47m2l"] Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.826057 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wwwnp"] Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.827584 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.835177 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.837165 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwwnp"] Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.859413 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.882759 4947 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.882796 4947 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.897522 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:49 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:49 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:49 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.897988 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.959759 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nmrkd"] Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.960869 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-catalog-content\") pod \"redhat-marketplace-wwwnp\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " 
pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.960903 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-utilities\") pod \"redhat-marketplace-wwwnp\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.960941 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8th5h\" (UniqueName: \"kubernetes.io/projected/06282146-8047-4104-b189-c896e5b7f8b9-kube-api-access-8th5h\") pod \"redhat-marketplace-wwwnp\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:11:49 crc kubenswrapper[4947]: I0125 00:11:49.986478 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mprs4\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") " pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.064106 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-utilities\") pod \"redhat-marketplace-wwwnp\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.064558 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8th5h\" (UniqueName: \"kubernetes.io/projected/06282146-8047-4104-b189-c896e5b7f8b9-kube-api-access-8th5h\") pod 
\"redhat-marketplace-wwwnp\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.064638 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-catalog-content\") pod \"redhat-marketplace-wwwnp\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.064730 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-utilities\") pod \"redhat-marketplace-wwwnp\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.064404 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4vzx6"] Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.065135 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-catalog-content\") pod \"redhat-marketplace-wwwnp\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.127074 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.147114 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.159022 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8th5h\" (UniqueName: \"kubernetes.io/projected/06282146-8047-4104-b189-c896e5b7f8b9-kube-api-access-8th5h\") pod \"redhat-marketplace-wwwnp\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.166301 4947 patch_prober.go:28] interesting pod/downloads-7954f5f757-5zvdg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.166346 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5zvdg" podUID="0e97ae5e-35ab-41e9-aa03-ad060bbbd676" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.168717 4947 patch_prober.go:28] interesting pod/downloads-7954f5f757-5zvdg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.168764 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5zvdg" podUID="0e97ae5e-35ab-41e9-aa03-ad060bbbd676" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.227718 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2ckt7"] Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.228676 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.247833 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ckt7"] Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.358422 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.370049 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xj6xh\" (UniqueName: \"kubernetes.io/projected/47cb5005-6286-4d5c-b654-65009ac6d3d9-kube-api-access-xj6xh\") pod \"redhat-marketplace-2ckt7\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.370195 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-utilities\") pod \"redhat-marketplace-2ckt7\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.370227 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-catalog-content\") pod \"redhat-marketplace-2ckt7\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " 
pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.471689 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xj6xh\" (UniqueName: \"kubernetes.io/projected/47cb5005-6286-4d5c-b654-65009ac6d3d9-kube-api-access-xj6xh\") pod \"redhat-marketplace-2ckt7\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.471730 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-utilities\") pod \"redhat-marketplace-2ckt7\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.471747 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-catalog-content\") pod \"redhat-marketplace-2ckt7\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.472398 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-catalog-content\") pod \"redhat-marketplace-2ckt7\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.472473 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-utilities\") pod \"redhat-marketplace-2ckt7\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " pod="openshift-marketplace/redhat-marketplace-2ckt7" 
Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.530194 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xj6xh\" (UniqueName: \"kubernetes.io/projected/47cb5005-6286-4d5c-b654-65009ac6d3d9-kube-api-access-xj6xh\") pod \"redhat-marketplace-2ckt7\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.634244 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.675493 4947 generic.go:334] "Generic (PLEG): container finished" podID="ad96bcad-395b-4844-9992-00acdf7436c2" containerID="95a8066ea5df0bbdd562663b51cd869b66acf5c56464c5567e5718c195b7c643" exitCode=0 Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.675830 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47m2l" event={"ID":"ad96bcad-395b-4844-9992-00acdf7436c2","Type":"ContainerDied","Data":"95a8066ea5df0bbdd562663b51cd869b66acf5c56464c5567e5718c195b7c643"} Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.675855 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47m2l" event={"ID":"ad96bcad-395b-4844-9992-00acdf7436c2","Type":"ContainerStarted","Data":"6cbc84951af1c9fb04adcfedc17cf7a2205629dcc8722ddaa8c1026d70782225"} Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.678392 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.692461 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"f9b357dd8b8eb9fb06c3bf662c2fa7a42707d4d1f8cf3a4901b380828df7cb04"} Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.715412 4947 generic.go:334] "Generic (PLEG): container finished" podID="8631ec11-9ab2-4799-b57c-0a346ec69767" containerID="ffcce6eeb0e69e798c8e9df8d5c6434f6e64194dfff77e93d2a985b4528474b0" exitCode=0 Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.715561 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmrkd" event={"ID":"8631ec11-9ab2-4799-b57c-0a346ec69767","Type":"ContainerDied","Data":"ffcce6eeb0e69e798c8e9df8d5c6434f6e64194dfff77e93d2a985b4528474b0"} Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.715622 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmrkd" event={"ID":"8631ec11-9ab2-4799-b57c-0a346ec69767","Type":"ContainerStarted","Data":"7d6ebf3601605e6c873a327cc838407e459ee58147699177b0740d46b1d7aedf"} Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.721538 4947 generic.go:334] "Generic (PLEG): container finished" podID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" containerID="35a44afdfef807235aaca7df67ffffbacabda8facded58f94de250d11a4d2a25" exitCode=0 Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.721611 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4vzx6" event={"ID":"4fbe2fc7-f0a5-439c-988c-d034d3da6add","Type":"ContainerDied","Data":"35a44afdfef807235aaca7df67ffffbacabda8facded58f94de250d11a4d2a25"} Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.721639 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4vzx6" event={"ID":"4fbe2fc7-f0a5-439c-988c-d034d3da6add","Type":"ContainerStarted","Data":"e4fc08944b569f65f472ef5d6a0000744c15a40d1962fcdb333c93ea9560dbba"} Jan 25 00:11:50 crc kubenswrapper[4947]: 
I0125 00:11:50.738852 4947 generic.go:334] "Generic (PLEG): container finished" podID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" containerID="3d383bbb6e2f77eb14a9b8a1dbd79f1c501ad3a1a7e6ec1145d397b4e8d1deb0" exitCode=0 Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.738956 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzj76" event={"ID":"900aeb01-050c-45b8-936c-e5f8d73ebeb5","Type":"ContainerDied","Data":"3d383bbb6e2f77eb14a9b8a1dbd79f1c501ad3a1a7e6ec1145d397b4e8d1deb0"} Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.738989 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzj76" event={"ID":"900aeb01-050c-45b8-936c-e5f8d73ebeb5","Type":"ContainerStarted","Data":"596449ceb20f31ed206815663af8903fec2583551204ec75b85d39be48c2895f"} Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.744784 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"67dda4f2-8e20-4173-8789-a53030fa141f","Type":"ContainerStarted","Data":"464af017ba47fe42ba33892a92a7d488e926c49ee9650ccb5665a6435ad230c8"} Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.754086 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-j5vnk" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.794375 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.807709 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ltw77"] Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.808692 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.823529 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.825897 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ltw77"] Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.889828 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-utilities\") pod \"redhat-operators-ltw77\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") " pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.889898 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-catalog-content\") pod \"redhat-operators-ltw77\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") " pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.889962 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ndnd\" (UniqueName: \"kubernetes.io/projected/49263faf-29f4-481c-aafd-a271a29c209a-kube-api-access-7ndnd\") pod \"redhat-operators-ltw77\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") " pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.904320 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:50 crc 
kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:50 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:50 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.904370 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.991584 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ndnd\" (UniqueName: \"kubernetes.io/projected/49263faf-29f4-481c-aafd-a271a29c209a-kube-api-access-7ndnd\") pod \"redhat-operators-ltw77\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") " pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.991696 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-utilities\") pod \"redhat-operators-ltw77\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") " pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.991724 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-catalog-content\") pod \"redhat-operators-ltw77\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") " pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.992158 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-catalog-content\") pod \"redhat-operators-ltw77\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") 
" pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:11:50 crc kubenswrapper[4947]: I0125 00:11:50.992588 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-utilities\") pod \"redhat-operators-ltw77\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") " pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.012532 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mprs4"] Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.013872 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ndnd\" (UniqueName: \"kubernetes.io/projected/49263faf-29f4-481c-aafd-a271a29c209a-kube-api-access-7ndnd\") pod \"redhat-operators-ltw77\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") " pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.024806 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4w46p"] Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.025842 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.033694 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwwnp"] Jan 25 00:11:51 crc kubenswrapper[4947]: W0125 00:11:51.042931 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06282146_8047_4104_b189_c896e5b7f8b9.slice/crio-b331dec527a60132595158dee76520a26cd144ddc3aa45e156eaf1db6341fcb3 WatchSource:0}: Error finding container b331dec527a60132595158dee76520a26cd144ddc3aa45e156eaf1db6341fcb3: Status 404 returned error can't find the container with id b331dec527a60132595158dee76520a26cd144ddc3aa45e156eaf1db6341fcb3 Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.069370 4947 patch_prober.go:28] interesting pod/apiserver-76f77b778f-7kcc9 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 25 00:11:51 crc kubenswrapper[4947]: [+]log ok Jan 25 00:11:51 crc kubenswrapper[4947]: [+]etcd ok Jan 25 00:11:51 crc kubenswrapper[4947]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 25 00:11:51 crc kubenswrapper[4947]: [+]poststarthook/generic-apiserver-start-informers ok Jan 25 00:11:51 crc kubenswrapper[4947]: [+]poststarthook/max-in-flight-filter ok Jan 25 00:11:51 crc kubenswrapper[4947]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 25 00:11:51 crc kubenswrapper[4947]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 25 00:11:51 crc kubenswrapper[4947]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 25 00:11:51 crc kubenswrapper[4947]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 25 00:11:51 crc kubenswrapper[4947]: 
[+]poststarthook/project.openshift.io-projectcache ok Jan 25 00:11:51 crc kubenswrapper[4947]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 25 00:11:51 crc kubenswrapper[4947]: [+]poststarthook/openshift.io-startinformers ok Jan 25 00:11:51 crc kubenswrapper[4947]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 25 00:11:51 crc kubenswrapper[4947]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 25 00:11:51 crc kubenswrapper[4947]: livez check failed Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.069793 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" podUID="4e8662e0-1de8-4371-8836-214a0394675c" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.095694 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-catalog-content\") pod \"redhat-operators-4w46p\" (UID: \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.095752 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fvpv\" (UniqueName: \"kubernetes.io/projected/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-kube-api-access-9fvpv\") pod \"redhat-operators-4w46p\" (UID: \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.095823 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-utilities\") pod \"redhat-operators-4w46p\" (UID: 
\"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.119835 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.120553 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4w46p"] Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.144494 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.218912 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-utilities\") pod \"redhat-operators-4w46p\" (UID: \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.218961 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-catalog-content\") pod \"redhat-operators-4w46p\" (UID: \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.218994 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fvpv\" (UniqueName: \"kubernetes.io/projected/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-kube-api-access-9fvpv\") pod \"redhat-operators-4w46p\" (UID: \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.219734 4947 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-utilities\") pod \"redhat-operators-4w46p\" (UID: \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.219739 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-catalog-content\") pod \"redhat-operators-4w46p\" (UID: \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.256018 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fvpv\" (UniqueName: \"kubernetes.io/projected/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-kube-api-access-9fvpv\") pod \"redhat-operators-4w46p\" (UID: \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.390402 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.472583 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ckt7"] Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.781984 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"67dda4f2-8e20-4173-8789-a53030fa141f","Type":"ContainerStarted","Data":"3e65d42033471ac4f20585645dea9aea697ba13cd4aaebab2dce1b8c26f58aa3"} Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.788871 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ckt7" event={"ID":"47cb5005-6286-4d5c-b654-65009ac6d3d9","Type":"ContainerStarted","Data":"401542bfadee8c47bb521dc0eb21357c2ec3d46cc246c1c4b0d9b2d89d6fbbe2"} Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.823516 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ltw77"] Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.829663 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.829644248 podStartE2EDuration="3.829644248s" podCreationTimestamp="2026-01-25 00:11:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:51.828628402 +0000 UTC m=+151.061618842" watchObservedRunningTime="2026-01-25 00:11:51.829644248 +0000 UTC m=+151.062634688" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.834260 4947 generic.go:334] "Generic (PLEG): container finished" podID="06282146-8047-4104-b189-c896e5b7f8b9" containerID="b68921fda52bf0941dd3182cdd91346027905a40aad8abb0a9e34eef1346172c" exitCode=0 Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.834359 4947 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwwnp" event={"ID":"06282146-8047-4104-b189-c896e5b7f8b9","Type":"ContainerDied","Data":"b68921fda52bf0941dd3182cdd91346027905a40aad8abb0a9e34eef1346172c"} Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.834400 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwwnp" event={"ID":"06282146-8047-4104-b189-c896e5b7f8b9","Type":"ContainerStarted","Data":"b331dec527a60132595158dee76520a26cd144ddc3aa45e156eaf1db6341fcb3"} Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.845327 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" event={"ID":"ce1b6238-9a41-4472-accc-e4d7d6371357","Type":"ContainerStarted","Data":"51f2c364bfae060665da042ae7dc21f336f532bdfa00072378e4e19dbb303585"} Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.845372 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.845383 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" event={"ID":"ce1b6238-9a41-4472-accc-e4d7d6371357","Type":"ContainerStarted","Data":"8aa2ec1702299cb0f2f7ebe9da84ffc79ac7ec1919bcb49ddb3c081345236f17"} Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.883905 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" podStartSLOduration=130.883886433 podStartE2EDuration="2m10.883886433s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:51.883706228 +0000 UTC m=+151.116696668" watchObservedRunningTime="2026-01-25 00:11:51.883886433 +0000 UTC 
m=+151.116876873" Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.893653 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:51 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:51 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:51 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:51 crc kubenswrapper[4947]: I0125 00:11:51.893726 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:52 crc kubenswrapper[4947]: I0125 00:11:52.074746 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4w46p"] Jan 25 00:11:52 crc kubenswrapper[4947]: W0125 00:11:52.087324 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57fceeaa_414d_4570_98fb_2b8a06a7d3bb.slice/crio-6ff6fe5b753697e94b7c8e5ddcb3516739a88d3938d849311b89961974eb03c2 WatchSource:0}: Error finding container 6ff6fe5b753697e94b7c8e5ddcb3516739a88d3938d849311b89961974eb03c2: Status 404 returned error can't find the container with id 6ff6fe5b753697e94b7c8e5ddcb3516739a88d3938d849311b89961974eb03c2 Jan 25 00:11:52 crc kubenswrapper[4947]: E0125 00:11:52.327386 4947 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49263faf_29f4_481c_aafd_a271a29c209a.slice/crio-conmon-e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116.scope\": RecentStats: unable to find data in memory cache]" Jan 25 00:11:52 crc 
kubenswrapper[4947]: I0125 00:11:52.899858 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:52 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:52 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:52 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:52 crc kubenswrapper[4947]: I0125 00:11:52.899912 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:52 crc kubenswrapper[4947]: I0125 00:11:52.928149 4947 generic.go:334] "Generic (PLEG): container finished" podID="67dda4f2-8e20-4173-8789-a53030fa141f" containerID="3e65d42033471ac4f20585645dea9aea697ba13cd4aaebab2dce1b8c26f58aa3" exitCode=0 Jan 25 00:11:52 crc kubenswrapper[4947]: I0125 00:11:52.928252 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"67dda4f2-8e20-4173-8789-a53030fa141f","Type":"ContainerDied","Data":"3e65d42033471ac4f20585645dea9aea697ba13cd4aaebab2dce1b8c26f58aa3"} Jan 25 00:11:52 crc kubenswrapper[4947]: I0125 00:11:52.930683 4947 generic.go:334] "Generic (PLEG): container finished" podID="49263faf-29f4-481c-aafd-a271a29c209a" containerID="e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116" exitCode=0 Jan 25 00:11:52 crc kubenswrapper[4947]: I0125 00:11:52.930751 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltw77" event={"ID":"49263faf-29f4-481c-aafd-a271a29c209a","Type":"ContainerDied","Data":"e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116"} Jan 25 00:11:52 crc 
kubenswrapper[4947]: I0125 00:11:52.930789 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltw77" event={"ID":"49263faf-29f4-481c-aafd-a271a29c209a","Type":"ContainerStarted","Data":"23655937ab043534ca01347d9a2964b60c41f2a6eae0705e6c094b13084701de"} Jan 25 00:11:52 crc kubenswrapper[4947]: I0125 00:11:52.968239 4947 generic.go:334] "Generic (PLEG): container finished" podID="47cb5005-6286-4d5c-b654-65009ac6d3d9" containerID="2f94cab6a2a710126f5b6870bb3f746028f1b033d9975bad0560d251170fc46f" exitCode=0 Jan 25 00:11:52 crc kubenswrapper[4947]: I0125 00:11:52.968329 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ckt7" event={"ID":"47cb5005-6286-4d5c-b654-65009ac6d3d9","Type":"ContainerDied","Data":"2f94cab6a2a710126f5b6870bb3f746028f1b033d9975bad0560d251170fc46f"} Jan 25 00:11:52 crc kubenswrapper[4947]: I0125 00:11:52.971915 4947 generic.go:334] "Generic (PLEG): container finished" podID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" containerID="5b659f6a3287d4d66a77d57b5ec03b9728a52bc6ef42a979eeaaf06156f4c4a0" exitCode=0 Jan 25 00:11:52 crc kubenswrapper[4947]: I0125 00:11:52.972040 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4w46p" event={"ID":"57fceeaa-414d-4570-98fb-2b8a06a7d3bb","Type":"ContainerDied","Data":"5b659f6a3287d4d66a77d57b5ec03b9728a52bc6ef42a979eeaaf06156f4c4a0"} Jan 25 00:11:52 crc kubenswrapper[4947]: I0125 00:11:52.972087 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4w46p" event={"ID":"57fceeaa-414d-4570-98fb-2b8a06a7d3bb","Type":"ContainerStarted","Data":"6ff6fe5b753697e94b7c8e5ddcb3516739a88d3938d849311b89961974eb03c2"} Jan 25 00:11:53 crc kubenswrapper[4947]: I0125 00:11:53.890227 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP 
probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:53 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:53 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:53 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:53 crc kubenswrapper[4947]: I0125 00:11:53.890610 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.288534 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.293553 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-7kcc9" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.487462 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.563552 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67dda4f2-8e20-4173-8789-a53030fa141f-kubelet-dir\") pod \"67dda4f2-8e20-4173-8789-a53030fa141f\" (UID: \"67dda4f2-8e20-4173-8789-a53030fa141f\") " Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.563635 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67dda4f2-8e20-4173-8789-a53030fa141f-kube-api-access\") pod \"67dda4f2-8e20-4173-8789-a53030fa141f\" (UID: \"67dda4f2-8e20-4173-8789-a53030fa141f\") " Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.563679 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/67dda4f2-8e20-4173-8789-a53030fa141f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "67dda4f2-8e20-4173-8789-a53030fa141f" (UID: "67dda4f2-8e20-4173-8789-a53030fa141f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.563933 4947 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/67dda4f2-8e20-4173-8789-a53030fa141f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.588413 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67dda4f2-8e20-4173-8789-a53030fa141f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "67dda4f2-8e20-4173-8789-a53030fa141f" (UID: "67dda4f2-8e20-4173-8789-a53030fa141f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.665291 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/67dda4f2-8e20-4173-8789-a53030fa141f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.851654 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 25 00:11:54 crc kubenswrapper[4947]: E0125 00:11:54.851931 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67dda4f2-8e20-4173-8789-a53030fa141f" containerName="pruner" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.851945 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="67dda4f2-8e20-4173-8789-a53030fa141f" containerName="pruner" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.852059 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="67dda4f2-8e20-4173-8789-a53030fa141f" containerName="pruner" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.852524 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.858663 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.859272 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.876939 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.892362 4947 patch_prober.go:28] interesting pod/router-default-5444994796-5nscb container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 25 00:11:54 crc kubenswrapper[4947]: [-]has-synced failed: reason withheld Jan 25 00:11:54 crc kubenswrapper[4947]: [+]process-running ok Jan 25 00:11:54 crc kubenswrapper[4947]: healthz check failed Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.892417 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5nscb" podUID="2244349f-df5c-4813-a0e7-418a602f57b0" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.968489 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4343388d-56d1-4c5b-a26d-f6e582b7818e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4343388d-56d1-4c5b-a26d-f6e582b7818e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.968584 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4343388d-56d1-4c5b-a26d-f6e582b7818e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4343388d-56d1-4c5b-a26d-f6e582b7818e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.998399 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"67dda4f2-8e20-4173-8789-a53030fa141f","Type":"ContainerDied","Data":"464af017ba47fe42ba33892a92a7d488e926c49ee9650ccb5665a6435ad230c8"} Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.998437 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="464af017ba47fe42ba33892a92a7d488e926c49ee9650ccb5665a6435ad230c8" Jan 25 00:11:54 crc kubenswrapper[4947]: I0125 00:11:54.998455 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 25 00:11:55 crc kubenswrapper[4947]: I0125 00:11:55.084712 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4343388d-56d1-4c5b-a26d-f6e582b7818e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4343388d-56d1-4c5b-a26d-f6e582b7818e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 00:11:55 crc kubenswrapper[4947]: I0125 00:11:55.084830 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4343388d-56d1-4c5b-a26d-f6e582b7818e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4343388d-56d1-4c5b-a26d-f6e582b7818e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 00:11:55 crc kubenswrapper[4947]: I0125 00:11:55.086419 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/4343388d-56d1-4c5b-a26d-f6e582b7818e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4343388d-56d1-4c5b-a26d-f6e582b7818e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 00:11:55 crc kubenswrapper[4947]: I0125 00:11:55.134667 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4343388d-56d1-4c5b-a26d-f6e582b7818e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4343388d-56d1-4c5b-a26d-f6e582b7818e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 00:11:55 crc kubenswrapper[4947]: I0125 00:11:55.171601 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 00:11:55 crc kubenswrapper[4947]: I0125 00:11:55.566094 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 25 00:11:55 crc kubenswrapper[4947]: I0125 00:11:55.606158 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-5s2mh" Jan 25 00:11:56 crc kubenswrapper[4947]: I0125 00:11:56.000279 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:56 crc kubenswrapper[4947]: I0125 00:11:56.007933 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-5nscb" Jan 25 00:11:56 crc kubenswrapper[4947]: I0125 00:11:56.101585 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4343388d-56d1-4c5b-a26d-f6e582b7818e","Type":"ContainerStarted","Data":"7476936a1d8d55b81decfdc5bbb098f01c38f5aa4d9e5a87a5707598738da175"} Jan 25 00:11:57 crc kubenswrapper[4947]: I0125 00:11:57.143350 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4343388d-56d1-4c5b-a26d-f6e582b7818e","Type":"ContainerStarted","Data":"dc40331535091335870ab7cddd137b456fda7314c073ab1724841444245820ce"} Jan 25 00:11:57 crc kubenswrapper[4947]: I0125 00:11:57.145578 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.145567629 podStartE2EDuration="3.145567629s" podCreationTimestamp="2026-01-25 00:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:11:57.140552957 +0000 UTC m=+156.373543407" watchObservedRunningTime="2026-01-25 00:11:57.145567629 +0000 UTC m=+156.378558069" Jan 25 00:11:58 crc kubenswrapper[4947]: I0125 00:11:58.163850 4947 generic.go:334] "Generic (PLEG): container finished" podID="4343388d-56d1-4c5b-a26d-f6e582b7818e" containerID="dc40331535091335870ab7cddd137b456fda7314c073ab1724841444245820ce" exitCode=0 Jan 25 00:11:58 crc kubenswrapper[4947]: I0125 00:11:58.163893 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4343388d-56d1-4c5b-a26d-f6e582b7818e","Type":"ContainerDied","Data":"dc40331535091335870ab7cddd137b456fda7314c073ab1724841444245820ce"} Jan 25 00:11:58 crc kubenswrapper[4947]: I0125 00:11:58.935862 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:11:58 crc kubenswrapper[4947]: I0125 00:11:58.941251 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-95tmb" Jan 25 00:12:00 crc kubenswrapper[4947]: I0125 00:12:00.165577 4947 patch_prober.go:28] interesting pod/downloads-7954f5f757-5zvdg container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 
10.217.0.17:8080: connect: connection refused" start-of-body= Jan 25 00:12:00 crc kubenswrapper[4947]: I0125 00:12:00.166218 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-5zvdg" podUID="0e97ae5e-35ab-41e9-aa03-ad060bbbd676" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 25 00:12:00 crc kubenswrapper[4947]: I0125 00:12:00.169374 4947 patch_prober.go:28] interesting pod/downloads-7954f5f757-5zvdg container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Jan 25 00:12:00 crc kubenswrapper[4947]: I0125 00:12:00.169454 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-5zvdg" podUID="0e97ae5e-35ab-41e9-aa03-ad060bbbd676" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Jan 25 00:12:04 crc kubenswrapper[4947]: I0125 00:12:04.810337 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:12:04 crc kubenswrapper[4947]: I0125 00:12:04.816603 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9a64fbf1-68fc-4379-9bb7-009c4f2cc812-metrics-certs\") pod \"network-metrics-daemon-hj7kb\" (UID: \"9a64fbf1-68fc-4379-9bb7-009c4f2cc812\") " pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:12:04 crc kubenswrapper[4947]: I0125 00:12:04.912295 4947 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hj7kb" Jan 25 00:12:05 crc kubenswrapper[4947]: I0125 00:12:05.617471 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5nql4"] Jan 25 00:12:05 crc kubenswrapper[4947]: I0125 00:12:05.617711 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" podUID="9d15a018-8297-4e45-8a62-afa89a267381" containerName="controller-manager" containerID="cri-o://b8157486549c7153c6f440e775308e1931565402eac4d25b57d51cfdfab72be2" gracePeriod=30 Jan 25 00:12:05 crc kubenswrapper[4947]: I0125 00:12:05.630724 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z"] Jan 25 00:12:05 crc kubenswrapper[4947]: I0125 00:12:05.631016 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" podUID="37b7a00e-4def-4d1e-8333-94d15174223b" containerName="route-controller-manager" containerID="cri-o://7403745f150d63c1345cf7e91fe3a3905bd0f9a92d392d4fa7f206d8e146d177" gracePeriod=30 Jan 25 00:12:06 crc kubenswrapper[4947]: I0125 00:12:06.264008 4947 generic.go:334] "Generic (PLEG): container finished" podID="37b7a00e-4def-4d1e-8333-94d15174223b" containerID="7403745f150d63c1345cf7e91fe3a3905bd0f9a92d392d4fa7f206d8e146d177" exitCode=0 Jan 25 00:12:06 crc kubenswrapper[4947]: I0125 00:12:06.264139 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" event={"ID":"37b7a00e-4def-4d1e-8333-94d15174223b","Type":"ContainerDied","Data":"7403745f150d63c1345cf7e91fe3a3905bd0f9a92d392d4fa7f206d8e146d177"} Jan 25 00:12:06 crc kubenswrapper[4947]: I0125 00:12:06.267373 4947 generic.go:334] "Generic (PLEG): 
container finished" podID="9d15a018-8297-4e45-8a62-afa89a267381" containerID="b8157486549c7153c6f440e775308e1931565402eac4d25b57d51cfdfab72be2" exitCode=0 Jan 25 00:12:06 crc kubenswrapper[4947]: I0125 00:12:06.267414 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" event={"ID":"9d15a018-8297-4e45-8a62-afa89a267381","Type":"ContainerDied","Data":"b8157486549c7153c6f440e775308e1931565402eac4d25b57d51cfdfab72be2"} Jan 25 00:12:07 crc kubenswrapper[4947]: I0125 00:12:07.827703 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 00:12:07 crc kubenswrapper[4947]: I0125 00:12:07.853310 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4343388d-56d1-4c5b-a26d-f6e582b7818e-kube-api-access\") pod \"4343388d-56d1-4c5b-a26d-f6e582b7818e\" (UID: \"4343388d-56d1-4c5b-a26d-f6e582b7818e\") " Jan 25 00:12:07 crc kubenswrapper[4947]: I0125 00:12:07.853402 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4343388d-56d1-4c5b-a26d-f6e582b7818e-kubelet-dir\") pod \"4343388d-56d1-4c5b-a26d-f6e582b7818e\" (UID: \"4343388d-56d1-4c5b-a26d-f6e582b7818e\") " Jan 25 00:12:07 crc kubenswrapper[4947]: I0125 00:12:07.853556 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4343388d-56d1-4c5b-a26d-f6e582b7818e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4343388d-56d1-4c5b-a26d-f6e582b7818e" (UID: "4343388d-56d1-4c5b-a26d-f6e582b7818e"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:12:07 crc kubenswrapper[4947]: I0125 00:12:07.853842 4947 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4343388d-56d1-4c5b-a26d-f6e582b7818e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:07 crc kubenswrapper[4947]: I0125 00:12:07.859301 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4343388d-56d1-4c5b-a26d-f6e582b7818e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4343388d-56d1-4c5b-a26d-f6e582b7818e" (UID: "4343388d-56d1-4c5b-a26d-f6e582b7818e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:12:07 crc kubenswrapper[4947]: I0125 00:12:07.954535 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4343388d-56d1-4c5b-a26d-f6e582b7818e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:08 crc kubenswrapper[4947]: I0125 00:12:08.284708 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4343388d-56d1-4c5b-a26d-f6e582b7818e","Type":"ContainerDied","Data":"7476936a1d8d55b81decfdc5bbb098f01c38f5aa4d9e5a87a5707598738da175"} Jan 25 00:12:08 crc kubenswrapper[4947]: I0125 00:12:08.284763 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7476936a1d8d55b81decfdc5bbb098f01c38f5aa4d9e5a87a5707598738da175" Jan 25 00:12:08 crc kubenswrapper[4947]: I0125 00:12:08.284871 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 25 00:12:09 crc kubenswrapper[4947]: I0125 00:12:09.926598 4947 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vw66z container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 25 00:12:09 crc kubenswrapper[4947]: I0125 00:12:09.927057 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" podUID="37b7a00e-4def-4d1e-8333-94d15174223b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 25 00:12:10 crc kubenswrapper[4947]: I0125 00:12:10.133090 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" Jan 25 00:12:10 crc kubenswrapper[4947]: I0125 00:12:10.180672 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-5zvdg" Jan 25 00:12:10 crc kubenswrapper[4947]: I0125 00:12:10.276295 4947 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-5nql4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 25 00:12:10 crc kubenswrapper[4947]: I0125 00:12:10.276377 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" 
podUID="9d15a018-8297-4e45-8a62-afa89a267381" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 25 00:12:17 crc kubenswrapper[4947]: I0125 00:12:17.073038 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:12:17 crc kubenswrapper[4947]: I0125 00:12:17.073523 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:12:19 crc kubenswrapper[4947]: I0125 00:12:19.650463 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-tbnnc" Jan 25 00:12:19 crc kubenswrapper[4947]: I0125 00:12:19.927633 4947 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vw66z container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 25 00:12:19 crc kubenswrapper[4947]: I0125 00:12:19.928237 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" podUID="37b7a00e-4def-4d1e-8333-94d15174223b" containerName="route-controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 25 00:12:20 crc kubenswrapper[4947]: I0125 00:12:20.277711 4947 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-5nql4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 25 00:12:20 crc kubenswrapper[4947]: I0125 00:12:20.277786 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" podUID="9d15a018-8297-4e45-8a62-afa89a267381" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 25 00:12:24 crc kubenswrapper[4947]: I0125 00:12:24.387253 4947 generic.go:334] "Generic (PLEG): container finished" podID="9e4f53a6-fcc3-4310-965d-9a5dda91080b" containerID="e3aa396695721797a7d088335e8a4442951eaec070e1d3920c891d8532000ff2" exitCode=0 Jan 25 00:12:24 crc kubenswrapper[4947]: I0125 00:12:24.387453 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29488320-jf979" event={"ID":"9e4f53a6-fcc3-4310-965d-9a5dda91080b","Type":"ContainerDied","Data":"e3aa396695721797a7d088335e8a4442951eaec070e1d3920c891d8532000ff2"} Jan 25 00:12:28 crc kubenswrapper[4947]: I0125 00:12:28.248429 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 25 00:12:28 crc kubenswrapper[4947]: E0125 00:12:28.847875 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: 
context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 25 00:12:28 crc kubenswrapper[4947]: E0125 00:12:28.848062 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxzk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qzj76_openshift-marketplace(900aeb01-050c-45b8-936c-e5f8d73ebeb5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" 
logger="UnhandledError" Jan 25 00:12:28 crc kubenswrapper[4947]: E0125 00:12:28.849311 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qzj76" podUID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.883450 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qzj76" podUID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.900987 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:eb2f8e43e60769562d45eeebcd027f8298b39dfa92aed456412d36a28c32bc92: Get \"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:eb2f8e43e60769562d45eeebcd027f8298b39dfa92aed456412d36a28c32bc92\": context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.901311 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9fvpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-4w46p_openshift-marketplace(57fceeaa-414d-4570-98fb-2b8a06a7d3bb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: reading blob sha256:eb2f8e43e60769562d45eeebcd027f8298b39dfa92aed456412d36a28c32bc92: Get \"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:eb2f8e43e60769562d45eeebcd027f8298b39dfa92aed456412d36a28c32bc92\": context canceled" logger="UnhandledError" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.902461 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying system image from manifest list: reading blob sha256:eb2f8e43e60769562d45eeebcd027f8298b39dfa92aed456412d36a28c32bc92: Get \\\"https://registry.redhat.io/v2/redhat/redhat-operator-index/blobs/sha256:eb2f8e43e60769562d45eeebcd027f8298b39dfa92aed456412d36a28c32bc92\\\": context canceled\"" pod="openshift-marketplace/redhat-operators-4w46p" podUID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.906234 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage381955160/2\": happened during read: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.906455 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xj6xh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-2ckt7_openshift-marketplace(47cb5005-6286-4d5c-b654-65009ac6d3d9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file \"/var/tmp/container_images_storage381955160/2\": happened during read: context canceled" logger="UnhandledError" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.907796 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: writing blob: storing blob to file 
\\\"/var/tmp/container_images_storage381955160/2\\\": happened during read: context canceled\"" pod="openshift-marketplace/redhat-marketplace-2ckt7" podUID="47cb5005-6286-4d5c-b654-65009ac6d3d9" Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.927948 4947 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-vw66z container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.928020 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" podUID="37b7a00e-4def-4d1e-8333-94d15174223b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.960543 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.967770 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.984683 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.984901 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tfhzb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,
ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-nmrkd_openshift-marketplace(8631ec11-9ab2-4799-b57c-0a346ec69767): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.993789 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-nmrkd" podUID="8631ec11-9ab2-4799-b57c-0a346ec69767" Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.994280 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t"] Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.994532 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b7a00e-4def-4d1e-8333-94d15174223b" containerName="route-controller-manager" Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.994556 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b7a00e-4def-4d1e-8333-94d15174223b" containerName="route-controller-manager" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.994573 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d15a018-8297-4e45-8a62-afa89a267381" containerName="controller-manager" Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.994579 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d15a018-8297-4e45-8a62-afa89a267381" containerName="controller-manager" Jan 25 00:12:29 crc kubenswrapper[4947]: E0125 00:12:29.994589 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4343388d-56d1-4c5b-a26d-f6e582b7818e" containerName="pruner" Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.994596 
4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="4343388d-56d1-4c5b-a26d-f6e582b7818e" containerName="pruner" Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.994708 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="37b7a00e-4def-4d1e-8333-94d15174223b" containerName="route-controller-manager" Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.994719 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="4343388d-56d1-4c5b-a26d-f6e582b7818e" containerName="pruner" Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.994728 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d15a018-8297-4e45-8a62-afa89a267381" containerName="controller-manager" Jan 25 00:12:29 crc kubenswrapper[4947]: I0125 00:12:29.995190 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.002937 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-config\") pod \"37b7a00e-4def-4d1e-8333-94d15174223b\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.003004 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x956c\" (UniqueName: \"kubernetes.io/projected/37b7a00e-4def-4d1e-8333-94d15174223b-kube-api-access-x956c\") pod \"37b7a00e-4def-4d1e-8333-94d15174223b\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.003026 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-client-ca\") pod \"37b7a00e-4def-4d1e-8333-94d15174223b\" (UID: 
\"37b7a00e-4def-4d1e-8333-94d15174223b\") " Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.003090 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b7a00e-4def-4d1e-8333-94d15174223b-serving-cert\") pod \"37b7a00e-4def-4d1e-8333-94d15174223b\" (UID: \"37b7a00e-4def-4d1e-8333-94d15174223b\") " Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.003728 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-config" (OuterVolumeSpecName: "config") pod "37b7a00e-4def-4d1e-8333-94d15174223b" (UID: "37b7a00e-4def-4d1e-8333-94d15174223b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.004229 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t"] Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.004470 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-client-ca" (OuterVolumeSpecName: "client-ca") pod "37b7a00e-4def-4d1e-8333-94d15174223b" (UID: "37b7a00e-4def-4d1e-8333-94d15174223b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.009412 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b7a00e-4def-4d1e-8333-94d15174223b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "37b7a00e-4def-4d1e-8333-94d15174223b" (UID: "37b7a00e-4def-4d1e-8333-94d15174223b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.023886 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37b7a00e-4def-4d1e-8333-94d15174223b-kube-api-access-x956c" (OuterVolumeSpecName: "kube-api-access-x956c") pod "37b7a00e-4def-4d1e-8333-94d15174223b" (UID: "37b7a00e-4def-4d1e-8333-94d15174223b"). InnerVolumeSpecName "kube-api-access-x956c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.104764 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d15a018-8297-4e45-8a62-afa89a267381-serving-cert\") pod \"9d15a018-8297-4e45-8a62-afa89a267381\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.104863 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-config\") pod \"9d15a018-8297-4e45-8a62-afa89a267381\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.105031 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-proxy-ca-bundles\") pod \"9d15a018-8297-4e45-8a62-afa89a267381\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.105053 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsx9g\" (UniqueName: \"kubernetes.io/projected/9d15a018-8297-4e45-8a62-afa89a267381-kube-api-access-zsx9g\") pod \"9d15a018-8297-4e45-8a62-afa89a267381\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.105084 4947 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-client-ca\") pod \"9d15a018-8297-4e45-8a62-afa89a267381\" (UID: \"9d15a018-8297-4e45-8a62-afa89a267381\") " Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.105402 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-client-ca\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.105454 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-serving-cert\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.105528 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nrzk\" (UniqueName: \"kubernetes.io/projected/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-kube-api-access-8nrzk\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.105602 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-config\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: 
\"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.105685 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.105701 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x956c\" (UniqueName: \"kubernetes.io/projected/37b7a00e-4def-4d1e-8333-94d15174223b-kube-api-access-x956c\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.105718 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37b7a00e-4def-4d1e-8333-94d15174223b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.105735 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37b7a00e-4def-4d1e-8333-94d15174223b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.106352 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-client-ca" (OuterVolumeSpecName: "client-ca") pod "9d15a018-8297-4e45-8a62-afa89a267381" (UID: "9d15a018-8297-4e45-8a62-afa89a267381"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.106365 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9d15a018-8297-4e45-8a62-afa89a267381" (UID: "9d15a018-8297-4e45-8a62-afa89a267381"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.106596 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-config" (OuterVolumeSpecName: "config") pod "9d15a018-8297-4e45-8a62-afa89a267381" (UID: "9d15a018-8297-4e45-8a62-afa89a267381"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.108435 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d15a018-8297-4e45-8a62-afa89a267381-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d15a018-8297-4e45-8a62-afa89a267381" (UID: "9d15a018-8297-4e45-8a62-afa89a267381"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.108498 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d15a018-8297-4e45-8a62-afa89a267381-kube-api-access-zsx9g" (OuterVolumeSpecName: "kube-api-access-zsx9g") pod "9d15a018-8297-4e45-8a62-afa89a267381" (UID: "9d15a018-8297-4e45-8a62-afa89a267381"). InnerVolumeSpecName "kube-api-access-zsx9g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.206727 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-config\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.206820 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-client-ca\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.206857 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-serving-cert\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.206937 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nrzk\" (UniqueName: \"kubernetes.io/projected/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-kube-api-access-8nrzk\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.207003 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsx9g\" (UniqueName: 
\"kubernetes.io/projected/9d15a018-8297-4e45-8a62-afa89a267381-kube-api-access-zsx9g\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.207024 4947 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.207042 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.207060 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d15a018-8297-4e45-8a62-afa89a267381-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.207077 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d15a018-8297-4e45-8a62-afa89a267381-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.208096 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-config\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.208800 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-client-ca\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " 
pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.213711 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-serving-cert\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.227086 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nrzk\" (UniqueName: \"kubernetes.io/projected/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-kube-api-access-8nrzk\") pod \"route-controller-manager-5554874b69-jz72t\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.274655 4947 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-5nql4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": context deadline exceeded" start-of-body= Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.274721 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" podUID="9d15a018-8297-4e45-8a62-afa89a267381" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": context deadline exceeded" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.345857 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.429962 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" event={"ID":"37b7a00e-4def-4d1e-8333-94d15174223b","Type":"ContainerDied","Data":"f071273158cef43a1913f83b87e712c39d955ed385651512fb2fd76cd5e1e89d"} Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.429997 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.430091 4947 scope.go:117] "RemoveContainer" containerID="7403745f150d63c1345cf7e91fe3a3905bd0f9a92d392d4fa7f206d8e146d177" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.436870 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.437031 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-5nql4" event={"ID":"9d15a018-8297-4e45-8a62-afa89a267381","Type":"ContainerDied","Data":"f8461de37c95d2f68aceb775dc7fe3de76c76f12ffef68dfcf5d69ee41e8f1e4"} Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.453559 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.455539 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.461507 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.461998 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.473099 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.515397 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"94d05abe-f768-43d7-abf4-0a7a4e36c37e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.516003 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"94d05abe-f768-43d7-abf4-0a7a4e36c37e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.556769 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5nql4"] Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.564029 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-5nql4"] Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.587813 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z"] Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.590481 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-vw66z"] Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.617763 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"94d05abe-f768-43d7-abf4-0a7a4e36c37e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.617851 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"94d05abe-f768-43d7-abf4-0a7a4e36c37e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.617952 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"94d05abe-f768-43d7-abf4-0a7a4e36c37e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.634608 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"94d05abe-f768-43d7-abf4-0a7a4e36c37e\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 00:12:30 crc kubenswrapper[4947]: I0125 00:12:30.802981 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 00:12:31 crc kubenswrapper[4947]: I0125 00:12:31.106892 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37b7a00e-4def-4d1e-8333-94d15174223b" path="/var/lib/kubelet/pods/37b7a00e-4def-4d1e-8333-94d15174223b/volumes" Jan 25 00:12:31 crc kubenswrapper[4947]: I0125 00:12:31.108385 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d15a018-8297-4e45-8a62-afa89a267381" path="/var/lib/kubelet/pods/9d15a018-8297-4e45-8a62-afa89a267381/volumes" Jan 25 00:12:31 crc kubenswrapper[4947]: E0125 00:12:31.563186 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 25 00:12:31 crc kubenswrapper[4947]: E0125 00:12:31.563362 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7ndnd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-ltw77_openshift-marketplace(49263faf-29f4-481c-aafd-a271a29c209a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 25 00:12:31 crc kubenswrapper[4947]: E0125 00:12:31.564543 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-ltw77" podUID="49263faf-29f4-481c-aafd-a271a29c209a" Jan 25 00:12:32 crc 
kubenswrapper[4947]: I0125 00:12:32.168925 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5687556c4c-8vd78"] Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.169809 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.173142 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.175523 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.176391 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.176218 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.177843 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.180807 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.181212 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5687556c4c-8vd78"] Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.185578 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.238902 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6fvp\" (UniqueName: \"kubernetes.io/projected/494402be-6a25-4b8d-a515-de9eba8f1d31-kube-api-access-w6fvp\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.239051 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-client-ca\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.239097 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-config\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.239187 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-proxy-ca-bundles\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.239243 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/494402be-6a25-4b8d-a515-de9eba8f1d31-serving-cert\") pod \"controller-manager-5687556c4c-8vd78\" (UID: 
\"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.340455 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6fvp\" (UniqueName: \"kubernetes.io/projected/494402be-6a25-4b8d-a515-de9eba8f1d31-kube-api-access-w6fvp\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.340513 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-client-ca\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.340532 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-config\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.340556 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-proxy-ca-bundles\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.341648 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-proxy-ca-bundles\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.341756 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-config\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.341844 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/494402be-6a25-4b8d-a515-de9eba8f1d31-serving-cert\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.341904 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-client-ca\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.345110 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/494402be-6a25-4b8d-a515-de9eba8f1d31-serving-cert\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.355337 4947 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-w6fvp\" (UniqueName: \"kubernetes.io/projected/494402be-6a25-4b8d-a515-de9eba8f1d31-kube-api-access-w6fvp\") pod \"controller-manager-5687556c4c-8vd78\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:32 crc kubenswrapper[4947]: I0125 00:12:32.507706 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:35 crc kubenswrapper[4947]: I0125 00:12:35.845340 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 25 00:12:35 crc kubenswrapper[4947]: I0125 00:12:35.846805 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:12:35 crc kubenswrapper[4947]: I0125 00:12:35.855318 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 25 00:12:35 crc kubenswrapper[4947]: E0125 00:12:35.991191 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-ltw77" podUID="49263faf-29f4-481c-aafd-a271a29c209a" Jan 25 00:12:35 crc kubenswrapper[4947]: E0125 00:12:35.991447 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-nmrkd" podUID="8631ec11-9ab2-4799-b57c-0a346ec69767" Jan 25 00:12:35 crc kubenswrapper[4947]: E0125 00:12:35.991518 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-4w46p" podUID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" Jan 25 00:12:35 crc kubenswrapper[4947]: E0125 00:12:35.991633 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2ckt7" podUID="47cb5005-6286-4d5c-b654-65009ac6d3d9" Jan 25 00:12:35 crc kubenswrapper[4947]: I0125 00:12:35.996250 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-var-lock\") pod \"installer-9-crc\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:12:35 crc kubenswrapper[4947]: I0125 00:12:35.996344 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:12:35 crc kubenswrapper[4947]: I0125 00:12:35.996449 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3305e0ba-7064-415c-bbaa-bdc630d95e40-kube-api-access\") pod \"installer-9-crc\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:12:36 crc kubenswrapper[4947]: E0125 00:12:36.084935 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 25 00:12:36 crc kubenswrapper[4947]: E0125 00:12:36.085638 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lszc2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4vzx6_openshift-marketplace(4fbe2fc7-f0a5-439c-988c-d034d3da6add): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" 
Jan 25 00:12:36 crc kubenswrapper[4947]: E0125 00:12:36.086875 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4vzx6" podUID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" Jan 25 00:12:36 crc kubenswrapper[4947]: I0125 00:12:36.097852 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-var-lock\") pod \"installer-9-crc\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:12:36 crc kubenswrapper[4947]: I0125 00:12:36.097926 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:12:36 crc kubenswrapper[4947]: I0125 00:12:36.098001 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3305e0ba-7064-415c-bbaa-bdc630d95e40-kube-api-access\") pod \"installer-9-crc\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:12:36 crc kubenswrapper[4947]: I0125 00:12:36.098506 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-var-lock\") pod \"installer-9-crc\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:12:36 crc kubenswrapper[4947]: I0125 00:12:36.098569 4947 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-kubelet-dir\") pod \"installer-9-crc\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:12:36 crc kubenswrapper[4947]: I0125 00:12:36.121974 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3305e0ba-7064-415c-bbaa-bdc630d95e40-kube-api-access\") pod \"installer-9-crc\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:12:36 crc kubenswrapper[4947]: I0125 00:12:36.180549 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 25 00:12:37 crc kubenswrapper[4947]: E0125 00:12:37.450106 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4vzx6" podUID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" Jan 25 00:12:37 crc kubenswrapper[4947]: I0125 00:12:37.479338 4947 scope.go:117] "RemoveContainer" containerID="b8157486549c7153c6f440e775308e1931565402eac4d25b57d51cfdfab72be2" Jan 25 00:12:37 crc kubenswrapper[4947]: I0125 00:12:37.480622 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-pruner-29488320-jf979" event={"ID":"9e4f53a6-fcc3-4310-965d-9a5dda91080b","Type":"ContainerDied","Data":"3ad4af98bee02e97732bce66ef98812c3ca3ae679230f9721b2db8e405f6defc"} Jan 25 00:12:37 crc kubenswrapper[4947]: I0125 00:12:37.480661 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ad4af98bee02e97732bce66ef98812c3ca3ae679230f9721b2db8e405f6defc" Jan 25 00:12:37 crc kubenswrapper[4947]: I0125 00:12:37.505256 4947 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29488320-jf979" Jan 25 00:12:37 crc kubenswrapper[4947]: E0125 00:12:37.514871 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 25 00:12:37 crc kubenswrapper[4947]: E0125 00:12:37.515033 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8th5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeD
evices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-wwwnp_openshift-marketplace(06282146-8047-4104-b189-c896e5b7f8b9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 25 00:12:37 crc kubenswrapper[4947]: E0125 00:12:37.518844 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-wwwnp" podUID="06282146-8047-4104-b189-c896e5b7f8b9" Jan 25 00:12:37 crc kubenswrapper[4947]: E0125 00:12:37.562058 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 25 00:12:37 crc kubenswrapper[4947]: E0125 00:12:37.562545 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fhdwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-47m2l_openshift-marketplace(ad96bcad-395b-4844-9992-00acdf7436c2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 25 00:12:37 crc kubenswrapper[4947]: E0125 00:12:37.563888 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-47m2l" podUID="ad96bcad-395b-4844-9992-00acdf7436c2" Jan 25 00:12:37 crc 
kubenswrapper[4947]: I0125 00:12:37.635883 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9e4f53a6-fcc3-4310-965d-9a5dda91080b-serviceca\") pod \"9e4f53a6-fcc3-4310-965d-9a5dda91080b\" (UID: \"9e4f53a6-fcc3-4310-965d-9a5dda91080b\") " Jan 25 00:12:37 crc kubenswrapper[4947]: I0125 00:12:37.635931 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6msz\" (UniqueName: \"kubernetes.io/projected/9e4f53a6-fcc3-4310-965d-9a5dda91080b-kube-api-access-k6msz\") pod \"9e4f53a6-fcc3-4310-965d-9a5dda91080b\" (UID: \"9e4f53a6-fcc3-4310-965d-9a5dda91080b\") " Jan 25 00:12:37 crc kubenswrapper[4947]: I0125 00:12:37.637174 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e4f53a6-fcc3-4310-965d-9a5dda91080b-serviceca" (OuterVolumeSpecName: "serviceca") pod "9e4f53a6-fcc3-4310-965d-9a5dda91080b" (UID: "9e4f53a6-fcc3-4310-965d-9a5dda91080b"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:12:37 crc kubenswrapper[4947]: I0125 00:12:37.650031 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e4f53a6-fcc3-4310-965d-9a5dda91080b-kube-api-access-k6msz" (OuterVolumeSpecName: "kube-api-access-k6msz") pod "9e4f53a6-fcc3-4310-965d-9a5dda91080b" (UID: "9e4f53a6-fcc3-4310-965d-9a5dda91080b"). InnerVolumeSpecName "kube-api-access-k6msz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:12:37 crc kubenswrapper[4947]: I0125 00:12:37.738701 4947 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9e4f53a6-fcc3-4310-965d-9a5dda91080b-serviceca\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:37 crc kubenswrapper[4947]: I0125 00:12:37.738737 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6msz\" (UniqueName: \"kubernetes.io/projected/9e4f53a6-fcc3-4310-965d-9a5dda91080b-kube-api-access-k6msz\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:37 crc kubenswrapper[4947]: I0125 00:12:37.896259 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hj7kb"] Jan 25 00:12:37 crc kubenswrapper[4947]: W0125 00:12:37.900905 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a64fbf1_68fc_4379_9bb7_009c4f2cc812.slice/crio-e1697cdf50157c54be0fee186f9fa041d65aada475b62821b2409ec373f611b6 WatchSource:0}: Error finding container e1697cdf50157c54be0fee186f9fa041d65aada475b62821b2409ec373f611b6: Status 404 returned error can't find the container with id e1697cdf50157c54be0fee186f9fa041d65aada475b62821b2409ec373f611b6 Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.004570 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.010961 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5687556c4c-8vd78"] Jan 25 00:12:38 crc kubenswrapper[4947]: W0125 00:12:38.015408 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod94d05abe_f768_43d7_abf4_0a7a4e36c37e.slice/crio-0d6738675ab309ecb5f97f7f0380be43c80274292c8c75612a5f1dcb0f6499cb WatchSource:0}: Error finding container 
0d6738675ab309ecb5f97f7f0380be43c80274292c8c75612a5f1dcb0f6499cb: Status 404 returned error can't find the container with id 0d6738675ab309ecb5f97f7f0380be43c80274292c8c75612a5f1dcb0f6499cb Jan 25 00:12:38 crc kubenswrapper[4947]: W0125 00:12:38.015726 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod494402be_6a25_4b8d_a515_de9eba8f1d31.slice/crio-4c061a35ba34d1fb75aea276081e1c5742b11fc541423a49284514c05ed48d3b WatchSource:0}: Error finding container 4c061a35ba34d1fb75aea276081e1c5742b11fc541423a49284514c05ed48d3b: Status 404 returned error can't find the container with id 4c061a35ba34d1fb75aea276081e1c5742b11fc541423a49284514c05ed48d3b Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.059702 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.067250 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t"] Jan 25 00:12:38 crc kubenswrapper[4947]: W0125 00:12:38.068295 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3305e0ba_7064_415c_bbaa_bdc630d95e40.slice/crio-c860d5f033f7c7c376e626eb2cc279e42901878e52850a69646c280979b6502c WatchSource:0}: Error finding container c860d5f033f7c7c376e626eb2cc279e42901878e52850a69646c280979b6502c: Status 404 returned error can't find the container with id c860d5f033f7c7c376e626eb2cc279e42901878e52850a69646c280979b6502c Jan 25 00:12:38 crc kubenswrapper[4947]: W0125 00:12:38.084478 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1842ab3_9eb3_4aa3_b77f_ee74e120fe47.slice/crio-7fa469b1f93c6d5d71121e93f8cc7b379adfec6c3fbf576d48d60d1ba1c8315e WatchSource:0}: Error finding container 
7fa469b1f93c6d5d71121e93f8cc7b379adfec6c3fbf576d48d60d1ba1c8315e: Status 404 returned error can't find the container with id 7fa469b1f93c6d5d71121e93f8cc7b379adfec6c3fbf576d48d60d1ba1c8315e Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.487327 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" event={"ID":"494402be-6a25-4b8d-a515-de9eba8f1d31","Type":"ContainerStarted","Data":"1c028832216129582df30e9ae1e01392e885f0edd17b2831b6b48bd47fc1c5c0"} Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.487723 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.487734 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" event={"ID":"494402be-6a25-4b8d-a515-de9eba8f1d31","Type":"ContainerStarted","Data":"4c061a35ba34d1fb75aea276081e1c5742b11fc541423a49284514c05ed48d3b"} Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.491613 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" event={"ID":"9a64fbf1-68fc-4379-9bb7-009c4f2cc812","Type":"ContainerStarted","Data":"6cf2f71bea8dc343fff30434b9c7ab220a8da9ed448baca807433dff49bfd524"} Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.491646 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" event={"ID":"9a64fbf1-68fc-4379-9bb7-009c4f2cc812","Type":"ContainerStarted","Data":"a4e20ea7aa6163a8d9dd3f1dd01209a3a318ee65a41e958e76b459a269b88398"} Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.491658 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hj7kb" 
event={"ID":"9a64fbf1-68fc-4379-9bb7-009c4f2cc812","Type":"ContainerStarted","Data":"e1697cdf50157c54be0fee186f9fa041d65aada475b62821b2409ec373f611b6"} Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.493053 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.493407 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" event={"ID":"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47","Type":"ContainerStarted","Data":"6c1d7cc945b820b218fa75e06214182e9d3f500a0874845bdee9db322e9cef77"} Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.493425 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" event={"ID":"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47","Type":"ContainerStarted","Data":"7fa469b1f93c6d5d71121e93f8cc7b379adfec6c3fbf576d48d60d1ba1c8315e"} Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.493931 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.495696 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3305e0ba-7064-415c-bbaa-bdc630d95e40","Type":"ContainerStarted","Data":"34c655a70626cb0470c8341f4426a959f7be73c9bd302d5c7f42a0999b60f186"} Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.495721 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3305e0ba-7064-415c-bbaa-bdc630d95e40","Type":"ContainerStarted","Data":"c860d5f033f7c7c376e626eb2cc279e42901878e52850a69646c280979b6502c"} Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.499506 4947 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"94d05abe-f768-43d7-abf4-0a7a4e36c37e","Type":"ContainerStarted","Data":"3be61bec2372426fb30ddc693384b0919273401e3137f16957f1612fd7428fda"} Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.499532 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"94d05abe-f768-43d7-abf4-0a7a4e36c37e","Type":"ContainerStarted","Data":"0d6738675ab309ecb5f97f7f0380be43c80274292c8c75612a5f1dcb0f6499cb"} Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.499579 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-pruner-29488320-jf979" Jan 25 00:12:38 crc kubenswrapper[4947]: E0125 00:12:38.503161 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-47m2l" podUID="ad96bcad-395b-4844-9992-00acdf7436c2" Jan 25 00:12:38 crc kubenswrapper[4947]: E0125 00:12:38.504486 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-wwwnp" podUID="06282146-8047-4104-b189-c896e5b7f8b9" Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.507206 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" podStartSLOduration=13.50718859 podStartE2EDuration="13.50718859s" podCreationTimestamp="2026-01-25 00:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-25 00:12:38.503207176 +0000 UTC m=+197.736197606" watchObservedRunningTime="2026-01-25 00:12:38.50718859 +0000 UTC m=+197.740179030" Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.552692 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" podStartSLOduration=13.552671626 podStartE2EDuration="13.552671626s" podCreationTimestamp="2026-01-25 00:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:12:38.550344754 +0000 UTC m=+197.783335204" watchObservedRunningTime="2026-01-25 00:12:38.552671626 +0000 UTC m=+197.785662066" Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.573702 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.5736870080000003 podStartE2EDuration="3.573687008s" podCreationTimestamp="2026-01-25 00:12:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:12:38.57077025 +0000 UTC m=+197.803760690" watchObservedRunningTime="2026-01-25 00:12:38.573687008 +0000 UTC m=+197.806677448" Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.610708 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-hj7kb" podStartSLOduration=177.61069034 podStartE2EDuration="2m57.61069034s" podCreationTimestamp="2026-01-25 00:09:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:12:38.587190002 +0000 UTC m=+197.820180442" watchObservedRunningTime="2026-01-25 00:12:38.61069034 +0000 UTC m=+197.843680780" Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.649672 4947 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=8.649650823 podStartE2EDuration="8.649650823s" podCreationTimestamp="2026-01-25 00:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:12:38.642472535 +0000 UTC m=+197.875462975" watchObservedRunningTime="2026-01-25 00:12:38.649650823 +0000 UTC m=+197.882641263" Jan 25 00:12:38 crc kubenswrapper[4947]: I0125 00:12:38.882223 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:12:39 crc kubenswrapper[4947]: I0125 00:12:39.504845 4947 generic.go:334] "Generic (PLEG): container finished" podID="94d05abe-f768-43d7-abf4-0a7a4e36c37e" containerID="3be61bec2372426fb30ddc693384b0919273401e3137f16957f1612fd7428fda" exitCode=0 Jan 25 00:12:39 crc kubenswrapper[4947]: I0125 00:12:39.504954 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"94d05abe-f768-43d7-abf4-0a7a4e36c37e","Type":"ContainerDied","Data":"3be61bec2372426fb30ddc693384b0919273401e3137f16957f1612fd7428fda"} Jan 25 00:12:40 crc kubenswrapper[4947]: I0125 00:12:40.843383 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 00:12:40 crc kubenswrapper[4947]: I0125 00:12:40.980537 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kubelet-dir\") pod \"94d05abe-f768-43d7-abf4-0a7a4e36c37e\" (UID: \"94d05abe-f768-43d7-abf4-0a7a4e36c37e\") " Jan 25 00:12:40 crc kubenswrapper[4947]: I0125 00:12:40.980605 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "94d05abe-f768-43d7-abf4-0a7a4e36c37e" (UID: "94d05abe-f768-43d7-abf4-0a7a4e36c37e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:12:40 crc kubenswrapper[4947]: I0125 00:12:40.980676 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kube-api-access\") pod \"94d05abe-f768-43d7-abf4-0a7a4e36c37e\" (UID: \"94d05abe-f768-43d7-abf4-0a7a4e36c37e\") " Jan 25 00:12:40 crc kubenswrapper[4947]: I0125 00:12:40.980949 4947 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:40 crc kubenswrapper[4947]: I0125 00:12:40.990474 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "94d05abe-f768-43d7-abf4-0a7a4e36c37e" (UID: "94d05abe-f768-43d7-abf4-0a7a4e36c37e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:12:41 crc kubenswrapper[4947]: I0125 00:12:41.082058 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94d05abe-f768-43d7-abf4-0a7a4e36c37e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 25 00:12:41 crc kubenswrapper[4947]: I0125 00:12:41.543824 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"94d05abe-f768-43d7-abf4-0a7a4e36c37e","Type":"ContainerDied","Data":"0d6738675ab309ecb5f97f7f0380be43c80274292c8c75612a5f1dcb0f6499cb"} Jan 25 00:12:41 crc kubenswrapper[4947]: I0125 00:12:41.544429 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 25 00:12:41 crc kubenswrapper[4947]: I0125 00:12:41.546173 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d6738675ab309ecb5f97f7f0380be43c80274292c8c75612a5f1dcb0f6499cb" Jan 25 00:12:42 crc kubenswrapper[4947]: I0125 00:12:42.550631 4947 generic.go:334] "Generic (PLEG): container finished" podID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" containerID="43d814190eebb93dd6bba60983f167e4e7d8eb0cd7dcc3adc29725b6c2f90c2b" exitCode=0 Jan 25 00:12:42 crc kubenswrapper[4947]: I0125 00:12:42.550739 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzj76" event={"ID":"900aeb01-050c-45b8-936c-e5f8d73ebeb5","Type":"ContainerDied","Data":"43d814190eebb93dd6bba60983f167e4e7d8eb0cd7dcc3adc29725b6c2f90c2b"} Jan 25 00:12:43 crc kubenswrapper[4947]: I0125 00:12:43.558763 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzj76" event={"ID":"900aeb01-050c-45b8-936c-e5f8d73ebeb5","Type":"ContainerStarted","Data":"625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1"} Jan 25 00:12:43 crc 
kubenswrapper[4947]: I0125 00:12:43.577345 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qzj76" podStartSLOduration=4.243234018 podStartE2EDuration="56.577329434s" podCreationTimestamp="2026-01-25 00:11:47 +0000 UTC" firstStartedPulling="2026-01-25 00:11:50.742758844 +0000 UTC m=+149.975749285" lastFinishedPulling="2026-01-25 00:12:43.076854261 +0000 UTC m=+202.309844701" observedRunningTime="2026-01-25 00:12:43.576413669 +0000 UTC m=+202.809404149" watchObservedRunningTime="2026-01-25 00:12:43.577329434 +0000 UTC m=+202.810319874" Jan 25 00:12:47 crc kubenswrapper[4947]: I0125 00:12:47.072571 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:12:47 crc kubenswrapper[4947]: I0125 00:12:47.073049 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:12:47 crc kubenswrapper[4947]: I0125 00:12:47.073120 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:12:47 crc kubenswrapper[4947]: I0125 00:12:47.073866 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc"} pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" 
Jan 25 00:12:47 crc kubenswrapper[4947]: I0125 00:12:47.074033 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" containerID="cri-o://d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc" gracePeriod=600 Jan 25 00:12:47 crc kubenswrapper[4947]: I0125 00:12:47.581703 4947 generic.go:334] "Generic (PLEG): container finished" podID="47cb5005-6286-4d5c-b654-65009ac6d3d9" containerID="f0e8d23d69c130c02800100eadb68b2223ac406b1ec604b6d679944a469596fc" exitCode=0 Jan 25 00:12:47 crc kubenswrapper[4947]: I0125 00:12:47.581769 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ckt7" event={"ID":"47cb5005-6286-4d5c-b654-65009ac6d3d9","Type":"ContainerDied","Data":"f0e8d23d69c130c02800100eadb68b2223ac406b1ec604b6d679944a469596fc"} Jan 25 00:12:47 crc kubenswrapper[4947]: I0125 00:12:47.583302 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerID="d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc" exitCode=0 Jan 25 00:12:47 crc kubenswrapper[4947]: I0125 00:12:47.583344 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerDied","Data":"d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc"} Jan 25 00:12:48 crc kubenswrapper[4947]: I0125 00:12:48.413331 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:12:48 crc kubenswrapper[4947]: I0125 00:12:48.413707 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:12:48 crc kubenswrapper[4947]: I0125 
00:12:48.571337 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:12:48 crc kubenswrapper[4947]: I0125 00:12:48.605091 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"6a30b7c64683957a95e052711b4800e7471651a4fd592c025524ce839a16c59c"} Jan 25 00:12:48 crc kubenswrapper[4947]: I0125 00:12:48.652502 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:12:49 crc kubenswrapper[4947]: I0125 00:12:49.612806 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltw77" event={"ID":"49263faf-29f4-481c-aafd-a271a29c209a","Type":"ContainerStarted","Data":"4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16"} Jan 25 00:12:49 crc kubenswrapper[4947]: I0125 00:12:49.615380 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ckt7" event={"ID":"47cb5005-6286-4d5c-b654-65009ac6d3d9","Type":"ContainerStarted","Data":"317126b2d6e4f76d49bc8f58d4a6611c35dbaae31512dbd0e9d9b571103fb945"} Jan 25 00:12:49 crc kubenswrapper[4947]: I0125 00:12:49.650423 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2ckt7" podStartSLOduration=4.049122811 podStartE2EDuration="59.650403988s" podCreationTimestamp="2026-01-25 00:11:50 +0000 UTC" firstStartedPulling="2026-01-25 00:11:52.969887203 +0000 UTC m=+152.202877643" lastFinishedPulling="2026-01-25 00:12:48.57116838 +0000 UTC m=+207.804158820" observedRunningTime="2026-01-25 00:12:49.649330979 +0000 UTC m=+208.882321439" watchObservedRunningTime="2026-01-25 00:12:49.650403988 +0000 UTC m=+208.883394428" Jan 25 00:12:50 crc kubenswrapper[4947]: I0125 
00:12:50.624285 4947 generic.go:334] "Generic (PLEG): container finished" podID="49263faf-29f4-481c-aafd-a271a29c209a" containerID="4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16" exitCode=0 Jan 25 00:12:50 crc kubenswrapper[4947]: I0125 00:12:50.624525 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltw77" event={"ID":"49263faf-29f4-481c-aafd-a271a29c209a","Type":"ContainerDied","Data":"4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16"} Jan 25 00:12:50 crc kubenswrapper[4947]: I0125 00:12:50.632929 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4w46p" event={"ID":"57fceeaa-414d-4570-98fb-2b8a06a7d3bb","Type":"ContainerDied","Data":"7d1970bbd2b42fe7877661c52dbc7fbf441aa3e65a5ef8dfe46f639caa2e9c08"} Jan 25 00:12:50 crc kubenswrapper[4947]: I0125 00:12:50.633165 4947 generic.go:334] "Generic (PLEG): container finished" podID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" containerID="7d1970bbd2b42fe7877661c52dbc7fbf441aa3e65a5ef8dfe46f639caa2e9c08" exitCode=0 Jan 25 00:12:50 crc kubenswrapper[4947]: I0125 00:12:50.635344 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:12:50 crc kubenswrapper[4947]: I0125 00:12:50.635407 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:12:50 crc kubenswrapper[4947]: I0125 00:12:50.636000 4947 generic.go:334] "Generic (PLEG): container finished" podID="06282146-8047-4104-b189-c896e5b7f8b9" containerID="9e14cc4195bf11e8c81096f8e93eb602df46809eac413c4e97d8f0951f263a3e" exitCode=0 Jan 25 00:12:50 crc kubenswrapper[4947]: I0125 00:12:50.636663 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwwnp" 
event={"ID":"06282146-8047-4104-b189-c896e5b7f8b9","Type":"ContainerDied","Data":"9e14cc4195bf11e8c81096f8e93eb602df46809eac413c4e97d8f0951f263a3e"} Jan 25 00:12:50 crc kubenswrapper[4947]: I0125 00:12:50.698036 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:12:52 crc kubenswrapper[4947]: I0125 00:12:52.649247 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwwnp" event={"ID":"06282146-8047-4104-b189-c896e5b7f8b9","Type":"ContainerStarted","Data":"8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b"} Jan 25 00:12:52 crc kubenswrapper[4947]: I0125 00:12:52.667371 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wwwnp" podStartSLOduration=3.668193039 podStartE2EDuration="1m3.667351589s" podCreationTimestamp="2026-01-25 00:11:49 +0000 UTC" firstStartedPulling="2026-01-25 00:11:51.840334099 +0000 UTC m=+151.073324539" lastFinishedPulling="2026-01-25 00:12:51.839492659 +0000 UTC m=+211.072483089" observedRunningTime="2026-01-25 00:12:52.665142709 +0000 UTC m=+211.898133159" watchObservedRunningTime="2026-01-25 00:12:52.667351589 +0000 UTC m=+211.900342029" Jan 25 00:12:53 crc kubenswrapper[4947]: I0125 00:12:53.656912 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4w46p" event={"ID":"57fceeaa-414d-4570-98fb-2b8a06a7d3bb","Type":"ContainerStarted","Data":"cda1828998251daaa0955fcd8ff8f0139c55af4883ef564ee55bdf29defa9273"} Jan 25 00:12:54 crc kubenswrapper[4947]: I0125 00:12:54.664720 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4vzx6" event={"ID":"4fbe2fc7-f0a5-439c-988c-d034d3da6add","Type":"ContainerStarted","Data":"df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985"} Jan 25 00:12:54 crc kubenswrapper[4947]: I0125 
00:12:54.667232 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltw77" event={"ID":"49263faf-29f4-481c-aafd-a271a29c209a","Type":"ContainerStarted","Data":"482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3"} Jan 25 00:12:54 crc kubenswrapper[4947]: I0125 00:12:54.669413 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47m2l" event={"ID":"ad96bcad-395b-4844-9992-00acdf7436c2","Type":"ContainerStarted","Data":"f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7"} Jan 25 00:12:54 crc kubenswrapper[4947]: I0125 00:12:54.671297 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmrkd" event={"ID":"8631ec11-9ab2-4799-b57c-0a346ec69767","Type":"ContainerStarted","Data":"5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147"} Jan 25 00:12:54 crc kubenswrapper[4947]: I0125 00:12:54.754835 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4w46p" podStartSLOduration=5.414349168 podStartE2EDuration="1m4.754814415s" podCreationTimestamp="2026-01-25 00:11:50 +0000 UTC" firstStartedPulling="2026-01-25 00:11:52.973012525 +0000 UTC m=+152.206002965" lastFinishedPulling="2026-01-25 00:12:52.313477782 +0000 UTC m=+211.546468212" observedRunningTime="2026-01-25 00:12:54.751484484 +0000 UTC m=+213.984474924" watchObservedRunningTime="2026-01-25 00:12:54.754814415 +0000 UTC m=+213.987804855" Jan 25 00:12:54 crc kubenswrapper[4947]: I0125 00:12:54.779810 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ltw77" podStartSLOduration=4.993785403 podStartE2EDuration="1m4.779789705s" podCreationTimestamp="2026-01-25 00:11:50 +0000 UTC" firstStartedPulling="2026-01-25 00:11:52.933040104 +0000 UTC m=+152.166030544" lastFinishedPulling="2026-01-25 00:12:52.719044406 +0000 
UTC m=+211.952034846" observedRunningTime="2026-01-25 00:12:54.775963819 +0000 UTC m=+214.008954259" watchObservedRunningTime="2026-01-25 00:12:54.779789705 +0000 UTC m=+214.012780155" Jan 25 00:12:55 crc kubenswrapper[4947]: I0125 00:12:55.677614 4947 generic.go:334] "Generic (PLEG): container finished" podID="ad96bcad-395b-4844-9992-00acdf7436c2" containerID="f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7" exitCode=0 Jan 25 00:12:55 crc kubenswrapper[4947]: I0125 00:12:55.677691 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47m2l" event={"ID":"ad96bcad-395b-4844-9992-00acdf7436c2","Type":"ContainerDied","Data":"f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7"} Jan 25 00:12:55 crc kubenswrapper[4947]: I0125 00:12:55.680712 4947 generic.go:334] "Generic (PLEG): container finished" podID="8631ec11-9ab2-4799-b57c-0a346ec69767" containerID="5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147" exitCode=0 Jan 25 00:12:55 crc kubenswrapper[4947]: I0125 00:12:55.680780 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmrkd" event={"ID":"8631ec11-9ab2-4799-b57c-0a346ec69767","Type":"ContainerDied","Data":"5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147"} Jan 25 00:12:55 crc kubenswrapper[4947]: I0125 00:12:55.686361 4947 generic.go:334] "Generic (PLEG): container finished" podID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" containerID="df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985" exitCode=0 Jan 25 00:12:55 crc kubenswrapper[4947]: I0125 00:12:55.686396 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4vzx6" event={"ID":"4fbe2fc7-f0a5-439c-988c-d034d3da6add","Type":"ContainerDied","Data":"df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985"} Jan 25 00:12:59 crc kubenswrapper[4947]: I0125 00:12:59.709923 4947 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47m2l" event={"ID":"ad96bcad-395b-4844-9992-00acdf7436c2","Type":"ContainerStarted","Data":"d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289"} Jan 25 00:12:59 crc kubenswrapper[4947]: I0125 00:12:59.714519 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmrkd" event={"ID":"8631ec11-9ab2-4799-b57c-0a346ec69767","Type":"ContainerStarted","Data":"3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686"} Jan 25 00:12:59 crc kubenswrapper[4947]: I0125 00:12:59.718119 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4vzx6" event={"ID":"4fbe2fc7-f0a5-439c-988c-d034d3da6add","Type":"ContainerStarted","Data":"22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14"} Jan 25 00:12:59 crc kubenswrapper[4947]: I0125 00:12:59.740968 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-47m2l" podStartSLOduration=5.510123453 podStartE2EDuration="1m12.740941888s" podCreationTimestamp="2026-01-25 00:11:47 +0000 UTC" firstStartedPulling="2026-01-25 00:11:50.677154621 +0000 UTC m=+149.910145061" lastFinishedPulling="2026-01-25 00:12:57.907973046 +0000 UTC m=+217.140963496" observedRunningTime="2026-01-25 00:12:59.735852858 +0000 UTC m=+218.968843298" watchObservedRunningTime="2026-01-25 00:12:59.740941888 +0000 UTC m=+218.973932318" Jan 25 00:12:59 crc kubenswrapper[4947]: I0125 00:12:59.758877 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4vzx6" podStartSLOduration=3.610270666 podStartE2EDuration="1m11.758850663s" podCreationTimestamp="2026-01-25 00:11:48 +0000 UTC" firstStartedPulling="2026-01-25 00:11:50.730790781 +0000 UTC m=+149.963781221" lastFinishedPulling="2026-01-25 00:12:58.879370778 +0000 UTC 
m=+218.112361218" observedRunningTime="2026-01-25 00:12:59.757362281 +0000 UTC m=+218.990352741" watchObservedRunningTime="2026-01-25 00:12:59.758850663 +0000 UTC m=+218.991841103" Jan 25 00:12:59 crc kubenswrapper[4947]: I0125 00:12:59.786764 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nmrkd" podStartSLOduration=3.693786688 podStartE2EDuration="1m11.786724972s" podCreationTimestamp="2026-01-25 00:11:48 +0000 UTC" firstStartedPulling="2026-01-25 00:11:50.717220924 +0000 UTC m=+149.950211354" lastFinishedPulling="2026-01-25 00:12:58.810159198 +0000 UTC m=+218.043149638" observedRunningTime="2026-01-25 00:12:59.784746428 +0000 UTC m=+219.017736898" watchObservedRunningTime="2026-01-25 00:12:59.786724972 +0000 UTC m=+219.019715412" Jan 25 00:13:00 crc kubenswrapper[4947]: I0125 00:13:00.360840 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:13:00 crc kubenswrapper[4947]: I0125 00:13:00.360904 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:13:00 crc kubenswrapper[4947]: I0125 00:13:00.427178 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:13:00 crc kubenswrapper[4947]: I0125 00:13:00.678519 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:13:00 crc kubenswrapper[4947]: I0125 00:13:00.782817 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:13:01 crc kubenswrapper[4947]: I0125 00:13:01.145677 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:13:01 crc kubenswrapper[4947]: I0125 
00:13:01.145779 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:13:01 crc kubenswrapper[4947]: I0125 00:13:01.200743 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:13:01 crc kubenswrapper[4947]: I0125 00:13:01.391113 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:13:01 crc kubenswrapper[4947]: I0125 00:13:01.391165 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:13:01 crc kubenswrapper[4947]: I0125 00:13:01.440925 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:13:01 crc kubenswrapper[4947]: I0125 00:13:01.789463 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:13:01 crc kubenswrapper[4947]: I0125 00:13:01.811926 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:13:03 crc kubenswrapper[4947]: I0125 00:13:03.822043 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ckt7"] Jan 25 00:13:03 crc kubenswrapper[4947]: I0125 00:13:03.822803 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2ckt7" podUID="47cb5005-6286-4d5c-b654-65009ac6d3d9" containerName="registry-server" containerID="cri-o://317126b2d6e4f76d49bc8f58d4a6611c35dbaae31512dbd0e9d9b571103fb945" gracePeriod=2 Jan 25 00:13:04 crc kubenswrapper[4947]: I0125 00:13:04.021907 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4w46p"] Jan 25 00:13:04 crc 
kubenswrapper[4947]: I0125 00:13:04.022324 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4w46p" podUID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" containerName="registry-server" containerID="cri-o://cda1828998251daaa0955fcd8ff8f0139c55af4883ef564ee55bdf29defa9273" gracePeriod=2 Jan 25 00:13:05 crc kubenswrapper[4947]: I0125 00:13:05.617705 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5687556c4c-8vd78"] Jan 25 00:13:05 crc kubenswrapper[4947]: I0125 00:13:05.617999 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" podUID="494402be-6a25-4b8d-a515-de9eba8f1d31" containerName="controller-manager" containerID="cri-o://1c028832216129582df30e9ae1e01392e885f0edd17b2831b6b48bd47fc1c5c0" gracePeriod=30 Jan 25 00:13:05 crc kubenswrapper[4947]: I0125 00:13:05.700490 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t"] Jan 25 00:13:05 crc kubenswrapper[4947]: I0125 00:13:05.700855 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" podUID="d1842ab3-9eb3-4aa3-b77f-ee74e120fe47" containerName="route-controller-manager" containerID="cri-o://6c1d7cc945b820b218fa75e06214182e9d3f500a0874845bdee9db322e9cef77" gracePeriod=30 Jan 25 00:13:05 crc kubenswrapper[4947]: I0125 00:13:05.764713 4947 generic.go:334] "Generic (PLEG): container finished" podID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" containerID="cda1828998251daaa0955fcd8ff8f0139c55af4883ef564ee55bdf29defa9273" exitCode=0 Jan 25 00:13:05 crc kubenswrapper[4947]: I0125 00:13:05.764793 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4w46p" 
event={"ID":"57fceeaa-414d-4570-98fb-2b8a06a7d3bb","Type":"ContainerDied","Data":"cda1828998251daaa0955fcd8ff8f0139c55af4883ef564ee55bdf29defa9273"} Jan 25 00:13:05 crc kubenswrapper[4947]: I0125 00:13:05.767243 4947 generic.go:334] "Generic (PLEG): container finished" podID="47cb5005-6286-4d5c-b654-65009ac6d3d9" containerID="317126b2d6e4f76d49bc8f58d4a6611c35dbaae31512dbd0e9d9b571103fb945" exitCode=0 Jan 25 00:13:05 crc kubenswrapper[4947]: I0125 00:13:05.767289 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ckt7" event={"ID":"47cb5005-6286-4d5c-b654-65009ac6d3d9","Type":"ContainerDied","Data":"317126b2d6e4f76d49bc8f58d4a6611c35dbaae31512dbd0e9d9b571103fb945"} Jan 25 00:13:07 crc kubenswrapper[4947]: I0125 00:13:07.783000 4947 generic.go:334] "Generic (PLEG): container finished" podID="494402be-6a25-4b8d-a515-de9eba8f1d31" containerID="1c028832216129582df30e9ae1e01392e885f0edd17b2831b6b48bd47fc1c5c0" exitCode=0 Jan 25 00:13:07 crc kubenswrapper[4947]: I0125 00:13:07.783101 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" event={"ID":"494402be-6a25-4b8d-a515-de9eba8f1d31","Type":"ContainerDied","Data":"1c028832216129582df30e9ae1e01392e885f0edd17b2831b6b48bd47fc1c5c0"} Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.076731 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.206534 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-utilities\") pod \"47cb5005-6286-4d5c-b654-65009ac6d3d9\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.206668 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xj6xh\" (UniqueName: \"kubernetes.io/projected/47cb5005-6286-4d5c-b654-65009ac6d3d9-kube-api-access-xj6xh\") pod \"47cb5005-6286-4d5c-b654-65009ac6d3d9\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.206737 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-catalog-content\") pod \"47cb5005-6286-4d5c-b654-65009ac6d3d9\" (UID: \"47cb5005-6286-4d5c-b654-65009ac6d3d9\") " Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.208554 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-utilities" (OuterVolumeSpecName: "utilities") pod "47cb5005-6286-4d5c-b654-65009ac6d3d9" (UID: "47cb5005-6286-4d5c-b654-65009ac6d3d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.215468 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47cb5005-6286-4d5c-b654-65009ac6d3d9-kube-api-access-xj6xh" (OuterVolumeSpecName: "kube-api-access-xj6xh") pod "47cb5005-6286-4d5c-b654-65009ac6d3d9" (UID: "47cb5005-6286-4d5c-b654-65009ac6d3d9"). InnerVolumeSpecName "kube-api-access-xj6xh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.230065 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "47cb5005-6286-4d5c-b654-65009ac6d3d9" (UID: "47cb5005-6286-4d5c-b654-65009ac6d3d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.308104 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.308152 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xj6xh\" (UniqueName: \"kubernetes.io/projected/47cb5005-6286-4d5c-b654-65009ac6d3d9-kube-api-access-xj6xh\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.308164 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/47cb5005-6286-4d5c-b654-65009ac6d3d9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.410083 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.410159 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.413252 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.415411 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.468986 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.469892 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.592299 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.593224 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.611699 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.653622 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.719898 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-catalog-content\") pod \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\" (UID: \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.720614 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-utilities\") pod \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\" (UID: \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.720827 4947 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fvpv\" (UniqueName: \"kubernetes.io/projected/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-kube-api-access-9fvpv\") pod \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\" (UID: \"57fceeaa-414d-4570-98fb-2b8a06a7d3bb\") " Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.721641 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-utilities" (OuterVolumeSpecName: "utilities") pod "57fceeaa-414d-4570-98fb-2b8a06a7d3bb" (UID: "57fceeaa-414d-4570-98fb-2b8a06a7d3bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.728963 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-kube-api-access-9fvpv" (OuterVolumeSpecName: "kube-api-access-9fvpv") pod "57fceeaa-414d-4570-98fb-2b8a06a7d3bb" (UID: "57fceeaa-414d-4570-98fb-2b8a06a7d3bb"). InnerVolumeSpecName "kube-api-access-9fvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.782262 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-dmsjj"] Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.810760 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4w46p" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.810902 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4w46p" event={"ID":"57fceeaa-414d-4570-98fb-2b8a06a7d3bb","Type":"ContainerDied","Data":"6ff6fe5b753697e94b7c8e5ddcb3516739a88d3938d849311b89961974eb03c2"} Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.810974 4947 scope.go:117] "RemoveContainer" containerID="cda1828998251daaa0955fcd8ff8f0139c55af4883ef564ee55bdf29defa9273" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.822702 4947 generic.go:334] "Generic (PLEG): container finished" podID="d1842ab3-9eb3-4aa3-b77f-ee74e120fe47" containerID="6c1d7cc945b820b218fa75e06214182e9d3f500a0874845bdee9db322e9cef77" exitCode=0 Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.822977 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" event={"ID":"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47","Type":"ContainerDied","Data":"6c1d7cc945b820b218fa75e06214182e9d3f500a0874845bdee9db322e9cef77"} Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.823667 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.823695 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fvpv\" (UniqueName: \"kubernetes.io/projected/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-kube-api-access-9fvpv\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.834947 4947 scope.go:117] "RemoveContainer" containerID="7d1970bbd2b42fe7877661c52dbc7fbf441aa3e65a5ef8dfe46f639caa2e9c08" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.854469 4947 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2ckt7" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.856223 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2ckt7" event={"ID":"47cb5005-6286-4d5c-b654-65009ac6d3d9","Type":"ContainerDied","Data":"401542bfadee8c47bb521dc0eb21357c2ec3d46cc246c1c4b0d9b2d89d6fbbe2"} Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.881339 4947 scope.go:117] "RemoveContainer" containerID="5b659f6a3287d4d66a77d57b5ec03b9728a52bc6ef42a979eeaaf06156f4c4a0" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.910693 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ckt7"] Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.915524 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.920542 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2ckt7"] Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.929578 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.940792 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.941169 4947 scope.go:117] "RemoveContainer" containerID="317126b2d6e4f76d49bc8f58d4a6611c35dbaae31512dbd0e9d9b571103fb945" Jan 25 00:13:08 crc kubenswrapper[4947]: I0125 00:13:08.957757 4947 scope.go:117] "RemoveContainer" containerID="f0e8d23d69c130c02800100eadb68b2223ac406b1ec604b6d679944a469596fc" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.015733 4947 scope.go:117] "RemoveContainer" 
containerID="2f94cab6a2a710126f5b6870bb3f746028f1b033d9975bad0560d251170fc46f" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.101076 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47cb5005-6286-4d5c-b654-65009ac6d3d9" path="/var/lib/kubelet/pods/47cb5005-6286-4d5c-b654-65009ac6d3d9/volumes" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.546726 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.652671 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-client-ca\") pod \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.652750 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-serving-cert\") pod \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.653226 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-client-ca" (OuterVolumeSpecName: "client-ca") pod "d1842ab3-9eb3-4aa3-b77f-ee74e120fe47" (UID: "d1842ab3-9eb3-4aa3-b77f-ee74e120fe47"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.653772 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-config\") pod \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.653846 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nrzk\" (UniqueName: \"kubernetes.io/projected/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-kube-api-access-8nrzk\") pod \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\" (UID: \"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47\") " Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.654066 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.654403 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-config" (OuterVolumeSpecName: "config") pod "d1842ab3-9eb3-4aa3-b77f-ee74e120fe47" (UID: "d1842ab3-9eb3-4aa3-b77f-ee74e120fe47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.657654 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d1842ab3-9eb3-4aa3-b77f-ee74e120fe47" (UID: "d1842ab3-9eb3-4aa3-b77f-ee74e120fe47"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.658068 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-kube-api-access-8nrzk" (OuterVolumeSpecName: "kube-api-access-8nrzk") pod "d1842ab3-9eb3-4aa3-b77f-ee74e120fe47" (UID: "d1842ab3-9eb3-4aa3-b77f-ee74e120fe47"). InnerVolumeSpecName "kube-api-access-8nrzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.687205 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.755454 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6fvp\" (UniqueName: \"kubernetes.io/projected/494402be-6a25-4b8d-a515-de9eba8f1d31-kube-api-access-w6fvp\") pod \"494402be-6a25-4b8d-a515-de9eba8f1d31\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.755541 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/494402be-6a25-4b8d-a515-de9eba8f1d31-serving-cert\") pod \"494402be-6a25-4b8d-a515-de9eba8f1d31\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.755575 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-proxy-ca-bundles\") pod \"494402be-6a25-4b8d-a515-de9eba8f1d31\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.757334 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-config\") pod \"494402be-6a25-4b8d-a515-de9eba8f1d31\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.757369 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-client-ca\") pod \"494402be-6a25-4b8d-a515-de9eba8f1d31\" (UID: \"494402be-6a25-4b8d-a515-de9eba8f1d31\") " Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.757778 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nrzk\" (UniqueName: \"kubernetes.io/projected/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-kube-api-access-8nrzk\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.757800 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.757811 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.756648 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "494402be-6a25-4b8d-a515-de9eba8f1d31" (UID: "494402be-6a25-4b8d-a515-de9eba8f1d31"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.758281 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-client-ca" (OuterVolumeSpecName: "client-ca") pod "494402be-6a25-4b8d-a515-de9eba8f1d31" (UID: "494402be-6a25-4b8d-a515-de9eba8f1d31"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.758668 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-config" (OuterVolumeSpecName: "config") pod "494402be-6a25-4b8d-a515-de9eba8f1d31" (UID: "494402be-6a25-4b8d-a515-de9eba8f1d31"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.760767 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/494402be-6a25-4b8d-a515-de9eba8f1d31-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "494402be-6a25-4b8d-a515-de9eba8f1d31" (UID: "494402be-6a25-4b8d-a515-de9eba8f1d31"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.760889 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/494402be-6a25-4b8d-a515-de9eba8f1d31-kube-api-access-w6fvp" (OuterVolumeSpecName: "kube-api-access-w6fvp") pod "494402be-6a25-4b8d-a515-de9eba8f1d31" (UID: "494402be-6a25-4b8d-a515-de9eba8f1d31"). InnerVolumeSpecName "kube-api-access-w6fvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.859844 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.859895 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.859912 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6fvp\" (UniqueName: \"kubernetes.io/projected/494402be-6a25-4b8d-a515-de9eba8f1d31-kube-api-access-w6fvp\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.859939 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/494402be-6a25-4b8d-a515-de9eba8f1d31-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.859950 4947 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/494402be-6a25-4b8d-a515-de9eba8f1d31-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.862427 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.862582 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t" event={"ID":"d1842ab3-9eb3-4aa3-b77f-ee74e120fe47","Type":"ContainerDied","Data":"7fa469b1f93c6d5d71121e93f8cc7b379adfec6c3fbf576d48d60d1ba1c8315e"} Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.862675 4947 scope.go:117] "RemoveContainer" containerID="6c1d7cc945b820b218fa75e06214182e9d3f500a0874845bdee9db322e9cef77" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.869736 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.869714 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5687556c4c-8vd78" event={"ID":"494402be-6a25-4b8d-a515-de9eba8f1d31","Type":"ContainerDied","Data":"4c061a35ba34d1fb75aea276081e1c5742b11fc541423a49284514c05ed48d3b"} Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.899604 4947 scope.go:117] "RemoveContainer" containerID="1c028832216129582df30e9ae1e01392e885f0edd17b2831b6b48bd47fc1c5c0" Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.918166 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5687556c4c-8vd78"] Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.922096 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5687556c4c-8vd78"] Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 00:13:09.937821 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t"] Jan 25 00:13:09 crc kubenswrapper[4947]: I0125 
00:13:09.940433 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5554874b69-jz72t"] Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.213949 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"] Jan 25 00:13:10 crc kubenswrapper[4947]: E0125 00:13:10.224370 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47cb5005-6286-4d5c-b654-65009ac6d3d9" containerName="registry-server" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.224416 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="47cb5005-6286-4d5c-b654-65009ac6d3d9" containerName="registry-server" Jan 25 00:13:10 crc kubenswrapper[4947]: E0125 00:13:10.224441 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" containerName="extract-content" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.224451 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" containerName="extract-content" Jan 25 00:13:10 crc kubenswrapper[4947]: E0125 00:13:10.224465 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" containerName="extract-utilities" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.224475 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" containerName="extract-utilities" Jan 25 00:13:10 crc kubenswrapper[4947]: E0125 00:13:10.224491 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e4f53a6-fcc3-4310-965d-9a5dda91080b" containerName="image-pruner" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.224500 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e4f53a6-fcc3-4310-965d-9a5dda91080b" containerName="image-pruner" Jan 25 00:13:10 crc kubenswrapper[4947]: E0125 00:13:10.224511 4947 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47cb5005-6286-4d5c-b654-65009ac6d3d9" containerName="extract-utilities" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.224520 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="47cb5005-6286-4d5c-b654-65009ac6d3d9" containerName="extract-utilities" Jan 25 00:13:10 crc kubenswrapper[4947]: E0125 00:13:10.224537 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47cb5005-6286-4d5c-b654-65009ac6d3d9" containerName="extract-content" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.224546 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="47cb5005-6286-4d5c-b654-65009ac6d3d9" containerName="extract-content" Jan 25 00:13:10 crc kubenswrapper[4947]: E0125 00:13:10.224561 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d05abe-f768-43d7-abf4-0a7a4e36c37e" containerName="pruner" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.224569 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d05abe-f768-43d7-abf4-0a7a4e36c37e" containerName="pruner" Jan 25 00:13:10 crc kubenswrapper[4947]: E0125 00:13:10.224586 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="494402be-6a25-4b8d-a515-de9eba8f1d31" containerName="controller-manager" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.224595 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="494402be-6a25-4b8d-a515-de9eba8f1d31" containerName="controller-manager" Jan 25 00:13:10 crc kubenswrapper[4947]: E0125 00:13:10.224607 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1842ab3-9eb3-4aa3-b77f-ee74e120fe47" containerName="route-controller-manager" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.224616 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1842ab3-9eb3-4aa3-b77f-ee74e120fe47" containerName="route-controller-manager" Jan 25 00:13:10 crc kubenswrapper[4947]: E0125 00:13:10.224625 4947 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" containerName="registry-server" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.224633 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" containerName="registry-server" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.225010 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" containerName="registry-server" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.225026 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d05abe-f768-43d7-abf4-0a7a4e36c37e" containerName="pruner" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.225042 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="494402be-6a25-4b8d-a515-de9eba8f1d31" containerName="controller-manager" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.225051 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1842ab3-9eb3-4aa3-b77f-ee74e120fe47" containerName="route-controller-manager" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.225063 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e4f53a6-fcc3-4310-965d-9a5dda91080b" containerName="image-pruner" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.225078 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="47cb5005-6286-4d5c-b654-65009ac6d3d9" containerName="registry-server" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.225573 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"] Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.226118 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"] Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.226236 
4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.226748 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.231737 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.232221 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.232566 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.232728 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.237926 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.238259 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.238409 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.238559 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.238849 4947 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-route-controller-manager"/"serving-cert" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.239016 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.239161 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.239387 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.249008 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.251586 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"] Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.267400 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-client-ca\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.267451 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/507c2587-a4ff-48cd-8740-800f9e614c65-serving-cert\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.267487 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-config\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.267811 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312d7f0a-f809-4e61-9ddd-46a5328b297c-serving-cert\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.267918 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-config\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.268020 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w7j5\" (UniqueName: \"kubernetes.io/projected/312d7f0a-f809-4e61-9ddd-46a5328b297c-kube-api-access-9w7j5\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.268190 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-client-ca\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: 
\"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.268317 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-proxy-ca-bundles\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.268410 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88fpz\" (UniqueName: \"kubernetes.io/projected/507c2587-a4ff-48cd-8740-800f9e614c65-kube-api-access-88fpz\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.370350 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312d7f0a-f809-4e61-9ddd-46a5328b297c-serving-cert\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.370425 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-config\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.370458 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-9w7j5\" (UniqueName: \"kubernetes.io/projected/312d7f0a-f809-4e61-9ddd-46a5328b297c-kube-api-access-9w7j5\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.370504 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-client-ca\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.370528 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-proxy-ca-bundles\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.370555 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88fpz\" (UniqueName: \"kubernetes.io/projected/507c2587-a4ff-48cd-8740-800f9e614c65-kube-api-access-88fpz\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.370582 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-client-ca\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " 
pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.370608 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/507c2587-a4ff-48cd-8740-800f9e614c65-serving-cert\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.370634 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-config\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.372281 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-client-ca\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.372400 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-proxy-ca-bundles\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.372950 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-client-ca\") pod 
\"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.374253 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-config\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.378333 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312d7f0a-f809-4e61-9ddd-46a5328b297c-serving-cert\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.378609 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/507c2587-a4ff-48cd-8740-800f9e614c65-serving-cert\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.393753 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w7j5\" (UniqueName: \"kubernetes.io/projected/312d7f0a-f809-4e61-9ddd-46a5328b297c-kube-api-access-9w7j5\") pod \"controller-manager-bf7b4c587-6xl8t\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.396860 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88fpz\" 
(UniqueName: \"kubernetes.io/projected/507c2587-a4ff-48cd-8740-800f9e614c65-kube-api-access-88fpz\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.587554 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.824395 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-config\") pod \"route-controller-manager-785b84857-vwtg7\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" Jan 25 00:13:10 crc kubenswrapper[4947]: I0125 00:13:10.876274 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.012846 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"] Jan 25 00:13:11 crc kubenswrapper[4947]: W0125 00:13:11.029963 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod312d7f0a_f809_4e61_9ddd_46a5328b297c.slice/crio-d19e2f3886646df15934b672a149e21934ab3e0f233ab59f1b6c04edeba54de9 WatchSource:0}: Error finding container d19e2f3886646df15934b672a149e21934ab3e0f233ab59f1b6c04edeba54de9: Status 404 returned error can't find the container with id d19e2f3886646df15934b672a149e21934ab3e0f233ab59f1b6c04edeba54de9 Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.097725 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="494402be-6a25-4b8d-a515-de9eba8f1d31" path="/var/lib/kubelet/pods/494402be-6a25-4b8d-a515-de9eba8f1d31/volumes" Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.098801 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1842ab3-9eb3-4aa3-b77f-ee74e120fe47" path="/var/lib/kubelet/pods/d1842ab3-9eb3-4aa3-b77f-ee74e120fe47/volumes" Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.337648 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"] Jan 25 00:13:11 crc kubenswrapper[4947]: W0125 00:13:11.355816 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod507c2587_a4ff_48cd_8740_800f9e614c65.slice/crio-e3712692a024a59d8c1f6eeedf100b94d93a2bc93180129c8667faed18062f64 WatchSource:0}: Error finding container e3712692a024a59d8c1f6eeedf100b94d93a2bc93180129c8667faed18062f64: Status 404 returned error can't find the 
container with id e3712692a024a59d8c1f6eeedf100b94d93a2bc93180129c8667faed18062f64 Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.450387 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57fceeaa-414d-4570-98fb-2b8a06a7d3bb" (UID: "57fceeaa-414d-4570-98fb-2b8a06a7d3bb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.489398 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57fceeaa-414d-4570-98fb-2b8a06a7d3bb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.563433 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4w46p"] Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.573584 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4w46p"] Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.888627 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" event={"ID":"507c2587-a4ff-48cd-8740-800f9e614c65","Type":"ContainerStarted","Data":"767798f91b016414bc55cbaa8ecb99f3eaea02413cb909d3fa8466c832339705"} Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.888686 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.888700 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" 
event={"ID":"507c2587-a4ff-48cd-8740-800f9e614c65","Type":"ContainerStarted","Data":"e3712692a024a59d8c1f6eeedf100b94d93a2bc93180129c8667faed18062f64"} Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.893477 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" event={"ID":"312d7f0a-f809-4e61-9ddd-46a5328b297c","Type":"ContainerStarted","Data":"16bf48c77027103a40b98bc529ff47a76d184e9a4ff844a9962a12f32fb22999"} Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.893546 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" event={"ID":"312d7f0a-f809-4e61-9ddd-46a5328b297c","Type":"ContainerStarted","Data":"d19e2f3886646df15934b672a149e21934ab3e0f233ab59f1b6c04edeba54de9"} Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.893729 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.899981 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.918064 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" podStartSLOduration=6.91803713 podStartE2EDuration="6.91803713s" podCreationTimestamp="2026-01-25 00:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:13:11.914270636 +0000 UTC m=+231.147261076" watchObservedRunningTime="2026-01-25 00:13:11.91803713 +0000 UTC m=+231.151027570" Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.924367 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" Jan 25 00:13:11 crc kubenswrapper[4947]: I0125 00:13:11.936208 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" podStartSLOduration=6.936179481 podStartE2EDuration="6.936179481s" podCreationTimestamp="2026-01-25 00:13:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:13:11.932400327 +0000 UTC m=+231.165390757" watchObservedRunningTime="2026-01-25 00:13:11.936179481 +0000 UTC m=+231.169169921" Jan 25 00:13:12 crc kubenswrapper[4947]: I0125 00:13:12.616334 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nmrkd"] Jan 25 00:13:12 crc kubenswrapper[4947]: I0125 00:13:12.616698 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nmrkd" podUID="8631ec11-9ab2-4799-b57c-0a346ec69767" containerName="registry-server" containerID="cri-o://3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686" gracePeriod=2 Jan 25 00:13:12 crc kubenswrapper[4947]: I0125 00:13:12.818633 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4vzx6"] Jan 25 00:13:12 crc kubenswrapper[4947]: I0125 00:13:12.819072 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4vzx6" podUID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" containerName="registry-server" containerID="cri-o://22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14" gracePeriod=2 Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.099195 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57fceeaa-414d-4570-98fb-2b8a06a7d3bb" 
path="/var/lib/kubelet/pods/57fceeaa-414d-4570-98fb-2b8a06a7d3bb/volumes" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.821327 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.827885 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.921280 4947 generic.go:334] "Generic (PLEG): container finished" podID="8631ec11-9ab2-4799-b57c-0a346ec69767" containerID="3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686" exitCode=0 Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.921340 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmrkd" event={"ID":"8631ec11-9ab2-4799-b57c-0a346ec69767","Type":"ContainerDied","Data":"3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686"} Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.921378 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nmrkd" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.921408 4947 scope.go:117] "RemoveContainer" containerID="3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.921394 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmrkd" event={"ID":"8631ec11-9ab2-4799-b57c-0a346ec69767","Type":"ContainerDied","Data":"7d6ebf3601605e6c873a327cc838407e459ee58147699177b0740d46b1d7aedf"} Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.924382 4947 generic.go:334] "Generic (PLEG): container finished" podID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" containerID="22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14" exitCode=0 Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.924468 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4vzx6" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.924501 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4vzx6" event={"ID":"4fbe2fc7-f0a5-439c-988c-d034d3da6add","Type":"ContainerDied","Data":"22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14"} Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.924537 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4vzx6" event={"ID":"4fbe2fc7-f0a5-439c-988c-d034d3da6add","Type":"ContainerDied","Data":"e4fc08944b569f65f472ef5d6a0000744c15a40d1962fcdb333c93ea9560dbba"} Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.940227 4947 scope.go:117] "RemoveContainer" containerID="5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.957940 4947 scope.go:117] "RemoveContainer" 
containerID="ffcce6eeb0e69e798c8e9df8d5c6434f6e64194dfff77e93d2a985b4528474b0" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.979065 4947 scope.go:117] "RemoveContainer" containerID="3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686" Jan 25 00:13:13 crc kubenswrapper[4947]: E0125 00:13:13.979608 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686\": container with ID starting with 3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686 not found: ID does not exist" containerID="3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.979641 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686"} err="failed to get container status \"3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686\": rpc error: code = NotFound desc = could not find container \"3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686\": container with ID starting with 3479ef8ea86ed5aed24831846b55d2a592235e2a9ef11acc31185ab9e1a53686 not found: ID does not exist" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.979670 4947 scope.go:117] "RemoveContainer" containerID="5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147" Jan 25 00:13:13 crc kubenswrapper[4947]: E0125 00:13:13.979981 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147\": container with ID starting with 5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147 not found: ID does not exist" containerID="5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147" Jan 25 00:13:13 crc 
kubenswrapper[4947]: I0125 00:13:13.980036 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147"} err="failed to get container status \"5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147\": rpc error: code = NotFound desc = could not find container \"5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147\": container with ID starting with 5fe512aead52ea5a24493594687649dde048b801b45e7de2674ff17753b04147 not found: ID does not exist" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.980073 4947 scope.go:117] "RemoveContainer" containerID="ffcce6eeb0e69e798c8e9df8d5c6434f6e64194dfff77e93d2a985b4528474b0" Jan 25 00:13:13 crc kubenswrapper[4947]: E0125 00:13:13.980365 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffcce6eeb0e69e798c8e9df8d5c6434f6e64194dfff77e93d2a985b4528474b0\": container with ID starting with ffcce6eeb0e69e798c8e9df8d5c6434f6e64194dfff77e93d2a985b4528474b0 not found: ID does not exist" containerID="ffcce6eeb0e69e798c8e9df8d5c6434f6e64194dfff77e93d2a985b4528474b0" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.980388 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffcce6eeb0e69e798c8e9df8d5c6434f6e64194dfff77e93d2a985b4528474b0"} err="failed to get container status \"ffcce6eeb0e69e798c8e9df8d5c6434f6e64194dfff77e93d2a985b4528474b0\": rpc error: code = NotFound desc = could not find container \"ffcce6eeb0e69e798c8e9df8d5c6434f6e64194dfff77e93d2a985b4528474b0\": container with ID starting with ffcce6eeb0e69e798c8e9df8d5c6434f6e64194dfff77e93d2a985b4528474b0 not found: ID does not exist" Jan 25 00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.980400 4947 scope.go:117] "RemoveContainer" containerID="22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14" Jan 25 
00:13:13 crc kubenswrapper[4947]: I0125 00:13:13.997873 4947 scope.go:117] "RemoveContainer" containerID="df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.014065 4947 scope.go:117] "RemoveContainer" containerID="35a44afdfef807235aaca7df67ffffbacabda8facded58f94de250d11a4d2a25" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.021266 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-utilities\") pod \"8631ec11-9ab2-4799-b57c-0a346ec69767\" (UID: \"8631ec11-9ab2-4799-b57c-0a346ec69767\") " Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.021332 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-catalog-content\") pod \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.021377 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lszc2\" (UniqueName: \"kubernetes.io/projected/4fbe2fc7-f0a5-439c-988c-d034d3da6add-kube-api-access-lszc2\") pod \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.021930 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-utilities\") pod \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\" (UID: \"4fbe2fc7-f0a5-439c-988c-d034d3da6add\") " Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.021984 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfhzb\" (UniqueName: 
\"kubernetes.io/projected/8631ec11-9ab2-4799-b57c-0a346ec69767-kube-api-access-tfhzb\") pod \"8631ec11-9ab2-4799-b57c-0a346ec69767\" (UID: \"8631ec11-9ab2-4799-b57c-0a346ec69767\") " Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.022013 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-catalog-content\") pod \"8631ec11-9ab2-4799-b57c-0a346ec69767\" (UID: \"8631ec11-9ab2-4799-b57c-0a346ec69767\") " Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.027438 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-utilities" (OuterVolumeSpecName: "utilities") pod "8631ec11-9ab2-4799-b57c-0a346ec69767" (UID: "8631ec11-9ab2-4799-b57c-0a346ec69767"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.028700 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8631ec11-9ab2-4799-b57c-0a346ec69767-kube-api-access-tfhzb" (OuterVolumeSpecName: "kube-api-access-tfhzb") pod "8631ec11-9ab2-4799-b57c-0a346ec69767" (UID: "8631ec11-9ab2-4799-b57c-0a346ec69767"). InnerVolumeSpecName "kube-api-access-tfhzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.028853 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fbe2fc7-f0a5-439c-988c-d034d3da6add-kube-api-access-lszc2" (OuterVolumeSpecName: "kube-api-access-lszc2") pod "4fbe2fc7-f0a5-439c-988c-d034d3da6add" (UID: "4fbe2fc7-f0a5-439c-988c-d034d3da6add"). InnerVolumeSpecName "kube-api-access-lszc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.029303 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-utilities" (OuterVolumeSpecName: "utilities") pod "4fbe2fc7-f0a5-439c-988c-d034d3da6add" (UID: "4fbe2fc7-f0a5-439c-988c-d034d3da6add"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.057860 4947 scope.go:117] "RemoveContainer" containerID="22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14" Jan 25 00:13:14 crc kubenswrapper[4947]: E0125 00:13:14.058519 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14\": container with ID starting with 22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14 not found: ID does not exist" containerID="22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.058573 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14"} err="failed to get container status \"22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14\": rpc error: code = NotFound desc = could not find container \"22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14\": container with ID starting with 22c7ad1b6fc7e1b573a4edb484a50e425db99597935d2e89206ac4738f31db14 not found: ID does not exist" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.058607 4947 scope.go:117] "RemoveContainer" containerID="df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985" Jan 25 00:13:14 crc kubenswrapper[4947]: E0125 00:13:14.059720 4947 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985\": container with ID starting with df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985 not found: ID does not exist" containerID="df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.059764 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985"} err="failed to get container status \"df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985\": rpc error: code = NotFound desc = could not find container \"df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985\": container with ID starting with df53c6d3951ba0b238c3365e425e80c0a7a77e349ffdbd43a64e089db4db9985 not found: ID does not exist" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.059796 4947 scope.go:117] "RemoveContainer" containerID="35a44afdfef807235aaca7df67ffffbacabda8facded58f94de250d11a4d2a25" Jan 25 00:13:14 crc kubenswrapper[4947]: E0125 00:13:14.061485 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35a44afdfef807235aaca7df67ffffbacabda8facded58f94de250d11a4d2a25\": container with ID starting with 35a44afdfef807235aaca7df67ffffbacabda8facded58f94de250d11a4d2a25 not found: ID does not exist" containerID="35a44afdfef807235aaca7df67ffffbacabda8facded58f94de250d11a4d2a25" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.061512 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35a44afdfef807235aaca7df67ffffbacabda8facded58f94de250d11a4d2a25"} err="failed to get container status \"35a44afdfef807235aaca7df67ffffbacabda8facded58f94de250d11a4d2a25\": rpc error: code = NotFound desc = could not find container 
\"35a44afdfef807235aaca7df67ffffbacabda8facded58f94de250d11a4d2a25\": container with ID starting with 35a44afdfef807235aaca7df67ffffbacabda8facded58f94de250d11a4d2a25 not found: ID does not exist" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.080496 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8631ec11-9ab2-4799-b57c-0a346ec69767" (UID: "8631ec11-9ab2-4799-b57c-0a346ec69767"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.085109 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fbe2fc7-f0a5-439c-988c-d034d3da6add" (UID: "4fbe2fc7-f0a5-439c-988c-d034d3da6add"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.123803 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.123839 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfhzb\" (UniqueName: \"kubernetes.io/projected/8631ec11-9ab2-4799-b57c-0a346ec69767-kube-api-access-tfhzb\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.123852 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.123862 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8631ec11-9ab2-4799-b57c-0a346ec69767-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.123870 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fbe2fc7-f0a5-439c-988c-d034d3da6add-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.123879 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lszc2\" (UniqueName: \"kubernetes.io/projected/4fbe2fc7-f0a5-439c-988c-d034d3da6add-kube-api-access-lszc2\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.259979 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nmrkd"] Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.263320 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-nmrkd"] Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.270632 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4vzx6"] Jan 25 00:13:14 crc kubenswrapper[4947]: I0125 00:13:14.273869 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4vzx6"] Jan 25 00:13:15 crc kubenswrapper[4947]: I0125 00:13:15.100510 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" path="/var/lib/kubelet/pods/4fbe2fc7-f0a5-439c-988c-d034d3da6add/volumes" Jan 25 00:13:15 crc kubenswrapper[4947]: I0125 00:13:15.103449 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8631ec11-9ab2-4799-b57c-0a346ec69767" path="/var/lib/kubelet/pods/8631ec11-9ab2-4799-b57c-0a346ec69767/volumes" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.015057 4947 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.015695 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8631ec11-9ab2-4799-b57c-0a346ec69767" containerName="extract-content" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.015848 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8631ec11-9ab2-4799-b57c-0a346ec69767" containerName="extract-content" Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.015887 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" containerName="extract-content" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.015901 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" containerName="extract-content" Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.015932 4947 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="8631ec11-9ab2-4799-b57c-0a346ec69767" containerName="registry-server" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.015946 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8631ec11-9ab2-4799-b57c-0a346ec69767" containerName="registry-server" Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.015963 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" containerName="extract-utilities" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.015995 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" containerName="extract-utilities" Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.016030 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" containerName="registry-server" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.016048 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" containerName="registry-server" Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.016081 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8631ec11-9ab2-4799-b57c-0a346ec69767" containerName="extract-utilities" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.016094 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8631ec11-9ab2-4799-b57c-0a346ec69767" containerName="extract-utilities" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.016565 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8631ec11-9ab2-4799-b57c-0a346ec69767" containerName="registry-server" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.016610 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fbe2fc7-f0a5-439c-988c-d034d3da6add" containerName="registry-server" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.017430 4947 kubelet.go:2431] "SyncLoop REMOVE" 
source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.017690 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.017998 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42" gracePeriod=15 Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.018078 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59" gracePeriod=15 Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.018210 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c" gracePeriod=15 Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.018109 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278" gracePeriod=15 Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.018171 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2" gracePeriod=15 Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.018909 4947 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.019303 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019328 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.019349 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019361 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.019377 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019391 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.019405 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019417 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.019440 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019453 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.019469 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019481 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.019496 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019508 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019736 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019765 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019788 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019805 4947 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019821 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.019838 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.068355 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.149487 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.149573 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.150079 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 
00:13:16.150118 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.150149 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.150167 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.150227 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.150253 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 
crc kubenswrapper[4947]: I0125 00:13:16.251297 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251347 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251373 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251416 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251450 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251470 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251473 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251514 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251482 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251493 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251575 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251612 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251654 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251708 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251738 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.251768 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.365324 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 25 00:13:16 crc kubenswrapper[4947]: W0125 00:13:16.391381 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-d7349a1a4be8bfef6353d3f7647578bb21403b6708364430653b30bd7c8f97d9 WatchSource:0}: Error finding container d7349a1a4be8bfef6353d3f7647578bb21403b6708364430653b30bd7c8f97d9: Status 404 returned error can't find the container with id d7349a1a4be8bfef6353d3f7647578bb21403b6708364430653b30bd7c8f97d9
Jan 25 00:13:16 crc kubenswrapper[4947]: E0125 00:13:16.402080 4947 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188dd0f8fa88b90c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-25 00:13:16.400204044 +0000 UTC m=+235.633194494,LastTimestamp:2026-01-25 00:13:16.400204044 +0000 UTC m=+235.633194494,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.947758 4947 generic.go:334] "Generic (PLEG): container finished" podID="3305e0ba-7064-415c-bbaa-bdc630d95e40" containerID="34c655a70626cb0470c8341f4426a959f7be73c9bd302d5c7f42a0999b60f186" exitCode=0
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.947859 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3305e0ba-7064-415c-bbaa-bdc630d95e40","Type":"ContainerDied","Data":"34c655a70626cb0470c8341f4426a959f7be73c9bd302d5c7f42a0999b60f186"}
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.949843 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.950529 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"87af76d9cedf9765995fdba192251417d6cb96cc8dbaac5f8d89ebd77523cb24"}
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.950587 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"d7349a1a4be8bfef6353d3f7647578bb21403b6708364430653b30bd7c8f97d9"}
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.950630 4947 status_manager.go:851] "Failed to get status for pod" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.951427 4947 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.952056 4947 status_manager.go:851] "Failed to get status for pod" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.952514 4947 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.953307 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.953574 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.955698 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.956638 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59" exitCode=0
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.956670 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c" exitCode=0
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.956680 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278" exitCode=0
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.956688 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2" exitCode=2
Jan 25 00:13:16 crc kubenswrapper[4947]: I0125 00:13:16.956741 4947 scope.go:117] "RemoveContainer" containerID="86be715cf759c89eedc1cf176cf93473ac02cebed2f9ac9573f93faf1395b587"
Jan 25 00:13:17 crc kubenswrapper[4947]: I0125 00:13:17.968872 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.355841 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.356776 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.357383 4947 status_manager.go:851] "Failed to get status for pod" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.485787 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3305e0ba-7064-415c-bbaa-bdc630d95e40-kube-api-access\") pod \"3305e0ba-7064-415c-bbaa-bdc630d95e40\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") "
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.485871 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-var-lock\") pod \"3305e0ba-7064-415c-bbaa-bdc630d95e40\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") "
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.485952 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-kubelet-dir\") pod \"3305e0ba-7064-415c-bbaa-bdc630d95e40\" (UID: \"3305e0ba-7064-415c-bbaa-bdc630d95e40\") "
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.486464 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-var-lock" (OuterVolumeSpecName: "var-lock") pod "3305e0ba-7064-415c-bbaa-bdc630d95e40" (UID: "3305e0ba-7064-415c-bbaa-bdc630d95e40"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.486624 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3305e0ba-7064-415c-bbaa-bdc630d95e40" (UID: "3305e0ba-7064-415c-bbaa-bdc630d95e40"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.488018 4947 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.488045 4947 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3305e0ba-7064-415c-bbaa-bdc630d95e40-var-lock\") on node \"crc\" DevicePath \"\""
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.493754 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3305e0ba-7064-415c-bbaa-bdc630d95e40-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3305e0ba-7064-415c-bbaa-bdc630d95e40" (UID: "3305e0ba-7064-415c-bbaa-bdc630d95e40"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.497554 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.499530 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.500353 4947 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.501062 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.501581 4947 status_manager.go:851] "Failed to get status for pod" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.589579 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3305e0ba-7064-415c-bbaa-bdc630d95e40-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 25 00:13:18 crc kubenswrapper[4947]: E0125 00:13:18.645501 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:13:18Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:13:18Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:13:18Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-25T00:13:18Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:18 crc kubenswrapper[4947]: E0125 00:13:18.646057 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:18 crc kubenswrapper[4947]: E0125 00:13:18.646631 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:18 crc kubenswrapper[4947]: E0125 00:13:18.647061 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:18 crc kubenswrapper[4947]: E0125 00:13:18.647552 4947 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:18 crc kubenswrapper[4947]: E0125 00:13:18.647674 4947 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.690858 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.690992 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.690986 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.691072 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.691165 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.691193 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.691656 4947 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.691713 4947 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.691730 4947 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.981875 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"3305e0ba-7064-415c-bbaa-bdc630d95e40","Type":"ContainerDied","Data":"c860d5f033f7c7c376e626eb2cc279e42901878e52850a69646c280979b6502c"}
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.982049 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c860d5f033f7c7c376e626eb2cc279e42901878e52850a69646c280979b6502c"
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.982074 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.987735 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.989122 4947 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42" exitCode=0
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.989289 4947 scope.go:117] "RemoveContainer" containerID="7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59"
Jan 25 00:13:18 crc kubenswrapper[4947]: I0125 00:13:18.989401 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.006220 4947 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.006795 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.007293 4947 status_manager.go:851] "Failed to get status for pod" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.017411 4947 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.018607 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.019098 4947 status_manager.go:851] "Failed to get status for pod" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.022261 4947 scope.go:117] "RemoveContainer" containerID="bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.044051 4947 scope.go:117] "RemoveContainer" containerID="872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.069759 4947 scope.go:117] "RemoveContainer" containerID="d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.090092 4947 scope.go:117] "RemoveContainer" containerID="6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.106519 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.124364 4947 scope.go:117] "RemoveContainer" containerID="050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.171294 4947 scope.go:117] "RemoveContainer" containerID="7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59"
Jan 25 00:13:19 crc kubenswrapper[4947]: E0125 00:13:19.172435 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\": container with ID starting with 7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59 not found: ID does not exist" containerID="7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.172501 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59"} err="failed to get container status \"7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\": rpc error: code = NotFound desc = could not find container \"7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59\": container with ID starting with 7018b9816aaead206ec4c0eeafdcfd03299fc14f8c3c829094ce93605de56d59 not found: ID does not exist"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.172532 4947 scope.go:117] "RemoveContainer" containerID="bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c"
Jan 25 00:13:19 crc kubenswrapper[4947]: E0125 00:13:19.173211 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\": container with ID starting with bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c not found: ID does not exist" containerID="bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.173900 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c"} err="failed to get container status \"bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\": rpc error: code = NotFound desc = could not find container \"bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c\": container with ID starting with bba937dc900f26403383db943b8224568ac4b4100908d0a923d4b651c574a62c not found: ID does not exist"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.174073 4947 scope.go:117] "RemoveContainer" containerID="872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278"
Jan 25 00:13:19 crc kubenswrapper[4947]: E0125 00:13:19.174585 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\": container with ID starting with 872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278 not found: ID does not exist" containerID="872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.174677 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278"} err="failed to get container status \"872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\": rpc error: code = NotFound desc = could not find container \"872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278\": container with ID starting with 872dd0931688a428c0914e5dd1c869787289dd4e5341825f0660a718c807a278 not found: ID does not exist"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.174729 4947 scope.go:117] "RemoveContainer" containerID="d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2"
Jan 25 00:13:19 crc kubenswrapper[4947]: E0125 00:13:19.175322 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\": container with ID starting with d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2 not found: ID does not exist" containerID="d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.175385 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2"} err="failed to get container status \"d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\": rpc error: code = NotFound desc = could not find container \"d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2\": container with ID starting with d5a86bb020a281b0a0aec1865d9fcb1cf9e7553a087b0e483d3ab55bdb5143e2 not found: ID does not exist"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.175426 4947 scope.go:117] "RemoveContainer" containerID="6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42"
Jan 25 00:13:19 crc kubenswrapper[4947]: E0125 00:13:19.175840 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\": container with ID starting with 6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42 not found: ID does not exist" containerID="6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.175875 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42"} err="failed to get container status \"6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\": rpc error: code = NotFound desc = could not find container \"6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42\": container with ID starting with 6d4e51576c1dccef0200fd8d230fd403d20560cae574c355295d6fde8f5c0e42 not found: ID does not exist"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.175892 4947 scope.go:117] "RemoveContainer" containerID="050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923"
Jan 25 00:13:19 crc kubenswrapper[4947]: E0125 00:13:19.176693 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\": container with ID starting with 050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923 not found: ID does not exist" containerID="050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923"
Jan 25 00:13:19 crc kubenswrapper[4947]: I0125 00:13:19.176742 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923"} err="failed to get container status \"050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\": rpc error: code = NotFound desc = could not find container \"050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923\": container with ID starting with 050370630c354564a41da4e21b602c6c97c45096c341b165971068042e1fa923 not found: ID does not exist"
Jan 25 00:13:20 crc kubenswrapper[4947]: E0125 00:13:20.123729 4947 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" volumeName="registry-storage"
Jan 25 00:13:21 crc kubenswrapper[4947]: I0125 00:13:21.096245 4947 status_manager.go:851] "Failed to get status for pod" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:21 crc kubenswrapper[4947]: I0125 00:13:21.096813 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:24 crc kubenswrapper[4947]: E0125 00:13:24.289646 4947 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188dd0f8fa88b90c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-25 00:13:16.400204044 +0000 UTC m=+235.633194494,LastTimestamp:2026-01-25 00:13:16.400204044 +0000 UTC m=+235.633194494,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 25 00:13:24 crc kubenswrapper[4947]: E0125 00:13:24.593271 4947 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:24 crc kubenswrapper[4947]: E0125 00:13:24.594356 4947 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:24 crc kubenswrapper[4947]: E0125 00:13:24.594828 4947 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:24 crc kubenswrapper[4947]: E0125 00:13:24.595230 4947 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:24 crc kubenswrapper[4947]: E0125 00:13:24.595797 4947 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused"
Jan 25 00:13:24 crc kubenswrapper[4947]: I0125 00:13:24.596011 4947 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 25 00:13:24 crc kubenswrapper[4947]: E0125 00:13:24.596706 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="200ms"
Jan 25 00:13:24 crc kubenswrapper[4947]: E0125 00:13:24.797921 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="400ms"
Jan 25 00:13:25 crc kubenswrapper[4947]: E0125 00:13:25.199595 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="800ms"
Jan 25 00:13:26 crc kubenswrapper[4947]: E0125 00:13:26.000305 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="1.6s"
Jan 25 00:13:27 crc kubenswrapper[4947]: E0125 00:13:27.601611 4947 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="3.2s"
Jan 25 00:13:28 crc kubenswrapper[4947]: I0125 00:13:28.089401 4947 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:28 crc kubenswrapper[4947]: I0125 00:13:28.090670 4947 status_manager.go:851] "Failed to get status for pod" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:28 crc kubenswrapper[4947]: I0125 00:13:28.091316 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:28 crc kubenswrapper[4947]: I0125 00:13:28.104815 4947 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c534de12-4879-4815-adf1-b14e38021e2b" Jan 25 00:13:28 crc kubenswrapper[4947]: I0125 00:13:28.104846 4947 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c534de12-4879-4815-adf1-b14e38021e2b" Jan 25 00:13:28 crc kubenswrapper[4947]: E0125 00:13:28.105462 4947 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:28 crc kubenswrapper[4947]: I0125 00:13:28.106339 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:28 crc kubenswrapper[4947]: W0125 00:13:28.139297 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-9ff1025610756624468ac546412d5d6e85e7f5196883f821e55f451c3d3b2c67 WatchSource:0}: Error finding container 9ff1025610756624468ac546412d5d6e85e7f5196883f821e55f451c3d3b2c67: Status 404 returned error can't find the container with id 9ff1025610756624468ac546412d5d6e85e7f5196883f821e55f451c3d3b2c67 Jan 25 00:13:29 crc kubenswrapper[4947]: I0125 00:13:29.072898 4947 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="dfb019c9fcd82e8237372312156ac571e3f5b36d6e459883bdb86a596fe52237" exitCode=0 Jan 25 00:13:29 crc kubenswrapper[4947]: I0125 00:13:29.072951 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"dfb019c9fcd82e8237372312156ac571e3f5b36d6e459883bdb86a596fe52237"} Jan 25 00:13:29 crc kubenswrapper[4947]: I0125 00:13:29.072982 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9ff1025610756624468ac546412d5d6e85e7f5196883f821e55f451c3d3b2c67"} Jan 25 00:13:29 crc kubenswrapper[4947]: I0125 00:13:29.073985 4947 status_manager.go:851] "Failed to get status for pod" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:29 crc kubenswrapper[4947]: I0125 00:13:29.074553 4947 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c534de12-4879-4815-adf1-b14e38021e2b" Jan 25 00:13:29 crc kubenswrapper[4947]: I0125 00:13:29.074591 4947 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c534de12-4879-4815-adf1-b14e38021e2b" Jan 25 00:13:29 crc kubenswrapper[4947]: I0125 00:13:29.074852 4947 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Jan 25 00:13:29 crc kubenswrapper[4947]: E0125 00:13:29.075173 4947 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:30 crc kubenswrapper[4947]: I0125 00:13:30.087296 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 25 00:13:30 crc kubenswrapper[4947]: I0125 00:13:30.087380 4947 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567" exitCode=1 Jan 25 00:13:30 crc kubenswrapper[4947]: I0125 00:13:30.087465 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567"} Jan 25 00:13:30 crc kubenswrapper[4947]: I0125 00:13:30.088184 4947 scope.go:117] 
"RemoveContainer" containerID="1cd033c6016dd7d8b12e908aedd47b8b7c6864c30b5a610db6832b8d46547567" Jan 25 00:13:30 crc kubenswrapper[4947]: I0125 00:13:30.090051 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c1ba21ed850a24e4512b9980a2c4aaefe34c0b158660d58e36f78ba3d218751d"} Jan 25 00:13:31 crc kubenswrapper[4947]: I0125 00:13:31.113918 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 25 00:13:31 crc kubenswrapper[4947]: I0125 00:13:31.114347 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e4fec833f5a38deaeead8a4f9ba077b2a44f467be5217a53cffae2b4724a66a7"} Jan 25 00:13:31 crc kubenswrapper[4947]: I0125 00:13:31.133492 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"02b87fc3b28accbe7a0acf36d867789cc0865b40bd69e86e3c2f0a70bbcffa30"} Jan 25 00:13:31 crc kubenswrapper[4947]: I0125 00:13:31.133566 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0d2a1a8fefb5290b4c28a33ea0be3555b6b93c7669c004491ea9696c7742add0"} Jan 25 00:13:32 crc kubenswrapper[4947]: I0125 00:13:32.145880 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ee5793a080897cce8f43b1b563bed74f401771063b1a7cb9528535aa36cfcce5"} Jan 25 00:13:32 crc kubenswrapper[4947]: I0125 
00:13:32.145926 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e49d4df16b5ad093711504292d719516cb304463fa12d438cfd29daa84d0fa89"} Jan 25 00:13:32 crc kubenswrapper[4947]: I0125 00:13:32.146078 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:32 crc kubenswrapper[4947]: I0125 00:13:32.146191 4947 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c534de12-4879-4815-adf1-b14e38021e2b" Jan 25 00:13:32 crc kubenswrapper[4947]: I0125 00:13:32.146220 4947 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c534de12-4879-4815-adf1-b14e38021e2b" Jan 25 00:13:33 crc kubenswrapper[4947]: I0125 00:13:33.106614 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:33 crc kubenswrapper[4947]: I0125 00:13:33.107061 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:33 crc kubenswrapper[4947]: I0125 00:13:33.112221 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:33 crc kubenswrapper[4947]: I0125 00:13:33.828039 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" podUID="d3a733c1-a1cf-42ef-a056-27185292354f" containerName="oauth-openshift" containerID="cri-o://e90e55f834d170a9f2751b73e12c28d7d7c3fcc4793df05b84ce36f32d19cba4" gracePeriod=15 Jan 25 00:13:34 crc kubenswrapper[4947]: I0125 00:13:34.160975 4947 generic.go:334] "Generic (PLEG): container finished" podID="d3a733c1-a1cf-42ef-a056-27185292354f" 
containerID="e90e55f834d170a9f2751b73e12c28d7d7c3fcc4793df05b84ce36f32d19cba4" exitCode=0 Jan 25 00:13:34 crc kubenswrapper[4947]: I0125 00:13:34.161038 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" event={"ID":"d3a733c1-a1cf-42ef-a056-27185292354f","Type":"ContainerDied","Data":"e90e55f834d170a9f2751b73e12c28d7d7c3fcc4793df05b84ce36f32d19cba4"} Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.103523 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124356 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-session\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124411 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-cliconfig\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124435 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-ocp-branding-template\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124461 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/d3a733c1-a1cf-42ef-a056-27185292354f-audit-dir\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124489 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-idp-0-file-data\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124513 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-trusted-ca-bundle\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124540 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-service-ca\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124589 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-login\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124635 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbbs4\" (UniqueName: \"kubernetes.io/projected/d3a733c1-a1cf-42ef-a056-27185292354f-kube-api-access-fbbs4\") 
pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124691 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-router-certs\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124725 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-audit-policies\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124757 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-error\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124796 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-provider-selection\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: \"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124841 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-serving-cert\") pod \"d3a733c1-a1cf-42ef-a056-27185292354f\" (UID: 
\"d3a733c1-a1cf-42ef-a056-27185292354f\") " Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.124882 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3a733c1-a1cf-42ef-a056-27185292354f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.125086 4947 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d3a733c1-a1cf-42ef-a056-27185292354f-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.125526 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.125548 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.125574 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.126335 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.130701 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.131561 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.133870 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3a733c1-a1cf-42ef-a056-27185292354f-kube-api-access-fbbs4" (OuterVolumeSpecName: "kube-api-access-fbbs4") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "kube-api-access-fbbs4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.134334 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.137844 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.138111 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.138188 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.138947 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.139153 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d3a733c1-a1cf-42ef-a056-27185292354f" (UID: "d3a733c1-a1cf-42ef-a056-27185292354f"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.172475 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" event={"ID":"d3a733c1-a1cf-42ef-a056-27185292354f","Type":"ContainerDied","Data":"9fddec1b4c50133c39595d5ae85373dbb93cca3db14bf1f44dacfede0073d88d"} Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.172514 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-dmsjj" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.172572 4947 scope.go:117] "RemoveContainer" containerID="e90e55f834d170a9f2751b73e12c28d7d7c3fcc4793df05b84ce36f32d19cba4" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.226921 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.226989 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.227048 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.227058 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.227068 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.227078 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.227088 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.227099 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.227116 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.227142 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbbs4\" (UniqueName: \"kubernetes.io/projected/d3a733c1-a1cf-42ef-a056-27185292354f-kube-api-access-fbbs4\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.227154 4947 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.227163 4947 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d3a733c1-a1cf-42ef-a056-27185292354f-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:35 crc kubenswrapper[4947]: I0125 00:13:35.227174 4947 reconciler_common.go:293] "Volume detached 
for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d3a733c1-a1cf-42ef-a056-27185292354f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 25 00:13:36 crc kubenswrapper[4947]: E0125 00:13:36.293617 4947 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError" Jan 25 00:13:37 crc kubenswrapper[4947]: I0125 00:13:37.166086 4947 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:37 crc kubenswrapper[4947]: I0125 00:13:37.568183 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:13:37 crc kubenswrapper[4947]: I0125 00:13:37.574619 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:13:38 crc kubenswrapper[4947]: I0125 00:13:38.113918 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:38 crc kubenswrapper[4947]: I0125 00:13:38.117214 4947 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="373dcecb-6344-4f80-99be-fca1af842f3d" Jan 25 00:13:38 crc kubenswrapper[4947]: I0125 00:13:38.189907 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:13:38 crc kubenswrapper[4947]: I0125 00:13:38.190036 4947 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c534de12-4879-4815-adf1-b14e38021e2b" Jan 25 00:13:38 crc kubenswrapper[4947]: I0125 00:13:38.190065 4947 
mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c534de12-4879-4815-adf1-b14e38021e2b" Jan 25 00:13:39 crc kubenswrapper[4947]: I0125 00:13:39.196965 4947 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c534de12-4879-4815-adf1-b14e38021e2b" Jan 25 00:13:39 crc kubenswrapper[4947]: I0125 00:13:39.197020 4947 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c534de12-4879-4815-adf1-b14e38021e2b" Jan 25 00:13:41 crc kubenswrapper[4947]: I0125 00:13:41.109926 4947 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="373dcecb-6344-4f80-99be-fca1af842f3d" Jan 25 00:13:45 crc kubenswrapper[4947]: I0125 00:13:45.366672 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 25 00:13:46 crc kubenswrapper[4947]: I0125 00:13:46.782898 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 25 00:13:46 crc kubenswrapper[4947]: I0125 00:13:46.885373 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 25 00:13:47 crc kubenswrapper[4947]: I0125 00:13:47.569550 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 25 00:13:47 crc kubenswrapper[4947]: I0125 00:13:47.668176 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 25 00:13:47 crc kubenswrapper[4947]: I0125 00:13:47.679180 4947 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"service-ca" Jan 25 00:13:47 crc kubenswrapper[4947]: I0125 00:13:47.716568 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 25 00:13:47 crc kubenswrapper[4947]: I0125 00:13:47.726021 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 25 00:13:48 crc kubenswrapper[4947]: I0125 00:13:48.057912 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 25 00:13:48 crc kubenswrapper[4947]: I0125 00:13:48.312441 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 25 00:13:48 crc kubenswrapper[4947]: I0125 00:13:48.364633 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 25 00:13:48 crc kubenswrapper[4947]: I0125 00:13:48.418319 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 25 00:13:48 crc kubenswrapper[4947]: I0125 00:13:48.798026 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 25 00:13:48 crc kubenswrapper[4947]: I0125 00:13:48.912759 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.056045 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.067754 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 25 00:13:49 crc 
kubenswrapper[4947]: I0125 00:13:49.117601 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.144802 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.154610 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.236088 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.535914 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.687507 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.730232 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.750193 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.767486 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.817092 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.875053 4947 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 25 00:13:49 crc kubenswrapper[4947]: I0125 00:13:49.904368 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.002981 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.024069 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.024318 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.055999 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.262517 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.314802 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.315566 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.336904 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.403718 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 25 00:13:50 crc 
kubenswrapper[4947]: I0125 00:13:50.574959 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.600876 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.645599 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.664243 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.671873 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.719310 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.722984 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 25 00:13:50 crc kubenswrapper[4947]: I0125 00:13:50.955976 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 25 00:13:51 crc kubenswrapper[4947]: I0125 00:13:51.116782 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 25 00:13:51 crc kubenswrapper[4947]: I0125 00:13:51.187711 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 25 00:13:51 crc kubenswrapper[4947]: I0125 00:13:51.222775 4947 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 25 00:13:51 crc kubenswrapper[4947]: I0125 00:13:51.333847 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 25 00:13:51 crc kubenswrapper[4947]: I0125 00:13:51.472627 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 25 00:13:51 crc kubenswrapper[4947]: I0125 00:13:51.526744 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 25 00:13:51 crc kubenswrapper[4947]: I0125 00:13:51.591470 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 25 00:13:51 crc kubenswrapper[4947]: I0125 00:13:51.761911 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 25 00:13:51 crc kubenswrapper[4947]: I0125 00:13:51.800915 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 25 00:13:51 crc kubenswrapper[4947]: I0125 00:13:51.989315 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.221635 4947 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.288082 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.425884 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.485815 4947 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.493081 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.527957 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.591261 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.603737 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.617730 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.628327 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.636997 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.637040 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.667195 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.732984 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 25 00:13:52 crc 
kubenswrapper[4947]: I0125 00:13:52.937176 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.963617 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 25 00:13:52 crc kubenswrapper[4947]: I0125 00:13:52.984242 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.103382 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.226863 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.293739 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.370318 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.423818 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.502797 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.530870 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.636193 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" 
Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.665347 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.773835 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.825036 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.856091 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 25 00:13:53 crc kubenswrapper[4947]: I0125 00:13:53.987878 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.025821 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.082730 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.108586 4947 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.175873 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.215149 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.291057 4947 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"etcd-client" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.325254 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.394074 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.420021 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.425286 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.432993 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.439688 4947 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.443309 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=38.443282819 podStartE2EDuration="38.443282819s" podCreationTimestamp="2026-01-25 00:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:13:36.128102126 +0000 UTC m=+255.361092606" watchObservedRunningTime="2026-01-25 00:13:54.443282819 +0000 UTC m=+273.676273269" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.446468 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-dmsjj","openshift-kube-apiserver/kube-apiserver-crc"] Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.446545 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.453741 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.457029 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.461690 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.476329 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.476305814 podStartE2EDuration="17.476305814s" podCreationTimestamp="2026-01-25 00:13:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:13:54.470016764 +0000 UTC m=+273.703007204" watchObservedRunningTime="2026-01-25 00:13:54.476305814 +0000 UTC m=+273.709296254" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.496811 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.502397 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.591271 4947 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"service-ca-bundle" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.624283 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.713218 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 25 00:13:54 crc kubenswrapper[4947]: I0125 00:13:54.917549 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.061234 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.101339 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3a733c1-a1cf-42ef-a056-27185292354f" path="/var/lib/kubelet/pods/d3a733c1-a1cf-42ef-a056-27185292354f/volumes" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.254110 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.262409 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.355545 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.394506 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.468688 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 
00:13:55.528309 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.565381 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.657749 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.662583 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.763829 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.813304 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.813465 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.822664 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.855601 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 25 00:13:55 crc kubenswrapper[4947]: I0125 00:13:55.917297 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.077647 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 25 
00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.099981 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.194678 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.241570 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.338857 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.385682 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.420481 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.443144 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.452015 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.592758 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.597044 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.632223 4947 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-root-ca.crt" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.661697 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.697262 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.774891 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.845651 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.857627 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.887267 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.933723 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.960486 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 25 00:13:56 crc kubenswrapper[4947]: I0125 00:13:56.974423 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.035595 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.064834 4947 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.119719 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.151554 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.316657 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.340818 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.351818 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.386650 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.406282 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.431821 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.445164 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.596305 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 
00:13:57.675547 4947 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.721261 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.744420 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.776666 4947 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.947569 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.961636 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.962196 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 25 00:13:57 crc kubenswrapper[4947]: I0125 00:13:57.995689 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.049749 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.112009 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.144803 4947 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-admission-controller-secret" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.199549 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.219010 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.258314 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.347011 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.363904 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.430054 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.430264 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.454428 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.529745 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.535571 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.549852 4947 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.550758 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.651199 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.675256 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.676571 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.702938 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.776504 4947 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.776730 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://87af76d9cedf9765995fdba192251417d6cb96cc8dbaac5f8d89ebd77523cb24" gracePeriod=5 Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.894890 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 25 00:13:58 crc kubenswrapper[4947]: I0125 00:13:58.908966 4947 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.003672 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.024063 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.035635 4947 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.209099 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.239259 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.349423 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.361860 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.369190 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.492565 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.502003 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.578172 4947 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.597102 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.689113 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.696260 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.832370 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.900225 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.952412 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 25 00:13:59 crc kubenswrapper[4947]: I0125 00:13:59.960949 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.054379 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.087112 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.119602 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 
00:14:00.302237 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.323310 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.355424 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.569936 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.576109 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.592088 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.624523 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.693186 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.795841 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.804078 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 25 00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.877083 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 25 
00:14:00 crc kubenswrapper[4947]: I0125 00:14:00.944434 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 25 00:14:01 crc kubenswrapper[4947]: I0125 00:14:01.046844 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 25 00:14:01 crc kubenswrapper[4947]: I0125 00:14:01.104621 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 25 00:14:01 crc kubenswrapper[4947]: I0125 00:14:01.220032 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 25 00:14:01 crc kubenswrapper[4947]: I0125 00:14:01.259460 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 25 00:14:01 crc kubenswrapper[4947]: I0125 00:14:01.416421 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 25 00:14:01 crc kubenswrapper[4947]: I0125 00:14:01.597693 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 25 00:14:01 crc kubenswrapper[4947]: I0125 00:14:01.623075 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 25 00:14:01 crc kubenswrapper[4947]: I0125 00:14:01.859823 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 25 00:14:01 crc kubenswrapper[4947]: I0125 00:14:01.984327 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 25 00:14:02 crc kubenswrapper[4947]: I0125 00:14:02.064546 4947 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 25 00:14:02 crc kubenswrapper[4947]: I0125 00:14:02.112532 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 25 00:14:02 crc kubenswrapper[4947]: I0125 00:14:02.257514 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 25 00:14:02 crc kubenswrapper[4947]: I0125 00:14:02.263787 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 25 00:14:02 crc kubenswrapper[4947]: I0125 00:14:02.791592 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 25 00:14:02 crc kubenswrapper[4947]: I0125 00:14:02.848663 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 25 00:14:03 crc kubenswrapper[4947]: I0125 00:14:03.313237 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 25 00:14:03 crc kubenswrapper[4947]: I0125 00:14:03.554436 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 25 00:14:03 crc kubenswrapper[4947]: I0125 00:14:03.636535 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 25 00:14:03 crc kubenswrapper[4947]: I0125 00:14:03.646839 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 25 00:14:03 crc kubenswrapper[4947]: I0125 00:14:03.836368 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 25 00:14:03 crc 
kubenswrapper[4947]: I0125 00:14:03.853074 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.357336 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.357576 4947 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="87af76d9cedf9765995fdba192251417d6cb96cc8dbaac5f8d89ebd77523cb24" exitCode=137 Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.357616 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7349a1a4be8bfef6353d3f7647578bb21403b6708364430653b30bd7c8f97d9" Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.371482 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.371581 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.406952 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.502739 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.502897 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.503355 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.503543 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.503695 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" 
(UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.503910 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.503437 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.503585 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.503960 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.504749 4947 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.504904 4947 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.505020 4947 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.505118 4947 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.514998 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:14:04 crc kubenswrapper[4947]: I0125 00:14:04.606541 4947 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:05 crc kubenswrapper[4947]: I0125 00:14:05.100345 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 25 00:14:05 crc kubenswrapper[4947]: I0125 00:14:05.100717 4947 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 25 00:14:05 crc kubenswrapper[4947]: I0125 00:14:05.112788 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 25 00:14:05 crc kubenswrapper[4947]: I0125 00:14:05.112842 4947 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2581c2c9-fa12-4425-9279-8b63dfe7ed94" Jan 25 00:14:05 crc kubenswrapper[4947]: I0125 00:14:05.115413 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 25 00:14:05 crc kubenswrapper[4947]: I0125 00:14:05.115463 4947 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2581c2c9-fa12-4425-9279-8b63dfe7ed94" Jan 25 00:14:05 crc kubenswrapper[4947]: I0125 00:14:05.378469 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 25 00:14:05 crc kubenswrapper[4947]: I0125 00:14:05.623934 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"] Jan 25 00:14:05 crc kubenswrapper[4947]: I0125 00:14:05.624333 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" podUID="312d7f0a-f809-4e61-9ddd-46a5328b297c" containerName="controller-manager" containerID="cri-o://16bf48c77027103a40b98bc529ff47a76d184e9a4ff844a9962a12f32fb22999" gracePeriod=30 Jan 25 00:14:05 crc kubenswrapper[4947]: I0125 00:14:05.629047 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"] Jan 25 00:14:05 crc kubenswrapper[4947]: I0125 00:14:05.629306 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" podUID="507c2587-a4ff-48cd-8740-800f9e614c65" containerName="route-controller-manager" containerID="cri-o://767798f91b016414bc55cbaa8ecb99f3eaea02413cb909d3fa8466c832339705" gracePeriod=30 Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.389187 4947 generic.go:334] "Generic (PLEG): container finished" podID="312d7f0a-f809-4e61-9ddd-46a5328b297c" containerID="16bf48c77027103a40b98bc529ff47a76d184e9a4ff844a9962a12f32fb22999" exitCode=0 Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.389306 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" event={"ID":"312d7f0a-f809-4e61-9ddd-46a5328b297c","Type":"ContainerDied","Data":"16bf48c77027103a40b98bc529ff47a76d184e9a4ff844a9962a12f32fb22999"} Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.391311 4947 generic.go:334] "Generic (PLEG): container finished" 
podID="507c2587-a4ff-48cd-8740-800f9e614c65" containerID="767798f91b016414bc55cbaa8ecb99f3eaea02413cb909d3fa8466c832339705" exitCode=0 Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.391425 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" event={"ID":"507c2587-a4ff-48cd-8740-800f9e614c65","Type":"ContainerDied","Data":"767798f91b016414bc55cbaa8ecb99f3eaea02413cb909d3fa8466c832339705"} Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.632993 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.711324 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.741911 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/507c2587-a4ff-48cd-8740-800f9e614c65-serving-cert\") pod \"507c2587-a4ff-48cd-8740-800f9e614c65\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.742091 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88fpz\" (UniqueName: \"kubernetes.io/projected/507c2587-a4ff-48cd-8740-800f9e614c65-kube-api-access-88fpz\") pod \"507c2587-a4ff-48cd-8740-800f9e614c65\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.742153 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-config\") pod \"507c2587-a4ff-48cd-8740-800f9e614c65\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " Jan 25 00:14:06 crc 
kubenswrapper[4947]: I0125 00:14:06.742236 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-client-ca\") pod \"507c2587-a4ff-48cd-8740-800f9e614c65\" (UID: \"507c2587-a4ff-48cd-8740-800f9e614c65\") " Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.743825 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-client-ca" (OuterVolumeSpecName: "client-ca") pod "507c2587-a4ff-48cd-8740-800f9e614c65" (UID: "507c2587-a4ff-48cd-8740-800f9e614c65"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.748965 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-config" (OuterVolumeSpecName: "config") pod "507c2587-a4ff-48cd-8740-800f9e614c65" (UID: "507c2587-a4ff-48cd-8740-800f9e614c65"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.750526 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/507c2587-a4ff-48cd-8740-800f9e614c65-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "507c2587-a4ff-48cd-8740-800f9e614c65" (UID: "507c2587-a4ff-48cd-8740-800f9e614c65"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.753317 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/507c2587-a4ff-48cd-8740-800f9e614c65-kube-api-access-88fpz" (OuterVolumeSpecName: "kube-api-access-88fpz") pod "507c2587-a4ff-48cd-8740-800f9e614c65" (UID: "507c2587-a4ff-48cd-8740-800f9e614c65"). 
InnerVolumeSpecName "kube-api-access-88fpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.843724 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-proxy-ca-bundles\") pod \"312d7f0a-f809-4e61-9ddd-46a5328b297c\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.844039 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-config\") pod \"312d7f0a-f809-4e61-9ddd-46a5328b297c\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.844276 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-client-ca\") pod \"312d7f0a-f809-4e61-9ddd-46a5328b297c\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.845176 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w7j5\" (UniqueName: \"kubernetes.io/projected/312d7f0a-f809-4e61-9ddd-46a5328b297c-kube-api-access-9w7j5\") pod \"312d7f0a-f809-4e61-9ddd-46a5328b297c\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") " Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.845079 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-client-ca" (OuterVolumeSpecName: "client-ca") pod "312d7f0a-f809-4e61-9ddd-46a5328b297c" (UID: "312d7f0a-f809-4e61-9ddd-46a5328b297c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.845161 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-config" (OuterVolumeSpecName: "config") pod "312d7f0a-f809-4e61-9ddd-46a5328b297c" (UID: "312d7f0a-f809-4e61-9ddd-46a5328b297c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.845168 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "312d7f0a-f809-4e61-9ddd-46a5328b297c" (UID: "312d7f0a-f809-4e61-9ddd-46a5328b297c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.845746 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312d7f0a-f809-4e61-9ddd-46a5328b297c-serving-cert\") pod \"312d7f0a-f809-4e61-9ddd-46a5328b297c\" (UID: \"312d7f0a-f809-4e61-9ddd-46a5328b297c\") "
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.846563 4947 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.846743 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/507c2587-a4ff-48cd-8740-800f9e614c65-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.846850 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-config\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.847041 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88fpz\" (UniqueName: \"kubernetes.io/projected/507c2587-a4ff-48cd-8740-800f9e614c65-kube-api-access-88fpz\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.847372 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-config\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.847476 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312d7f0a-f809-4e61-9ddd-46a5328b297c-client-ca\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.847559 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/507c2587-a4ff-48cd-8740-800f9e614c65-client-ca\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.849045 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/312d7f0a-f809-4e61-9ddd-46a5328b297c-kube-api-access-9w7j5" (OuterVolumeSpecName: "kube-api-access-9w7j5") pod "312d7f0a-f809-4e61-9ddd-46a5328b297c" (UID: "312d7f0a-f809-4e61-9ddd-46a5328b297c"). InnerVolumeSpecName "kube-api-access-9w7j5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.849366 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/312d7f0a-f809-4e61-9ddd-46a5328b297c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "312d7f0a-f809-4e61-9ddd-46a5328b297c" (UID: "312d7f0a-f809-4e61-9ddd-46a5328b297c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.948679 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w7j5\" (UniqueName: \"kubernetes.io/projected/312d7f0a-f809-4e61-9ddd-46a5328b297c-kube-api-access-9w7j5\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:06 crc kubenswrapper[4947]: I0125 00:14:06.948732 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312d7f0a-f809-4e61-9ddd-46a5328b297c-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.256190 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"]
Jan 25 00:14:07 crc kubenswrapper[4947]: E0125 00:14:07.256713 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a733c1-a1cf-42ef-a056-27185292354f" containerName="oauth-openshift"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.256757 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a733c1-a1cf-42ef-a056-27185292354f" containerName="oauth-openshift"
Jan 25 00:14:07 crc kubenswrapper[4947]: E0125 00:14:07.256801 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312d7f0a-f809-4e61-9ddd-46a5328b297c" containerName="controller-manager"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.256821 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="312d7f0a-f809-4e61-9ddd-46a5328b297c" containerName="controller-manager"
Jan 25 00:14:07 crc kubenswrapper[4947]: E0125 00:14:07.256847 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="507c2587-a4ff-48cd-8740-800f9e614c65" containerName="route-controller-manager"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.256867 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="507c2587-a4ff-48cd-8740-800f9e614c65" containerName="route-controller-manager"
Jan 25 00:14:07 crc kubenswrapper[4947]: E0125 00:14:07.256899 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.256917 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 25 00:14:07 crc kubenswrapper[4947]: E0125 00:14:07.256944 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" containerName="installer"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.258064 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" containerName="installer"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.258380 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="312d7f0a-f809-4e61-9ddd-46a5328b297c" containerName="controller-manager"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.258431 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.258459 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="3305e0ba-7064-415c-bbaa-bdc630d95e40" containerName="installer"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.258492 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a733c1-a1cf-42ef-a056-27185292354f" containerName="oauth-openshift"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.258520 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="507c2587-a4ff-48cd-8740-800f9e614c65" containerName="route-controller-manager"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.259263 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.272626 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"]
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.353908 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-client-ca\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.353983 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/224644f1-a5e3-4fa5-8c1c-97030c1796c5-serving-cert\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.354030 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-config\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.354066 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbndx\" (UniqueName: \"kubernetes.io/projected/224644f1-a5e3-4fa5-8c1c-97030c1796c5-kube-api-access-vbndx\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.402429 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7" event={"ID":"507c2587-a4ff-48cd-8740-800f9e614c65","Type":"ContainerDied","Data":"e3712692a024a59d8c1f6eeedf100b94d93a2bc93180129c8667faed18062f64"}
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.402448 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.402565 4947 scope.go:117] "RemoveContainer" containerID="767798f91b016414bc55cbaa8ecb99f3eaea02413cb909d3fa8466c832339705"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.405708 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t" event={"ID":"312d7f0a-f809-4e61-9ddd-46a5328b297c","Type":"ContainerDied","Data":"d19e2f3886646df15934b672a149e21934ab3e0f233ab59f1b6c04edeba54de9"}
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.406471 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.429252 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"]
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.435475 4947 scope.go:117] "RemoveContainer" containerID="16bf48c77027103a40b98bc529ff47a76d184e9a4ff844a9962a12f32fb22999"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.446045 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-bf7b4c587-6xl8t"]
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.454189 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"]
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.455252 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-config\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.455319 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbndx\" (UniqueName: \"kubernetes.io/projected/224644f1-a5e3-4fa5-8c1c-97030c1796c5-kube-api-access-vbndx\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.455370 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-client-ca\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.455442 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/224644f1-a5e3-4fa5-8c1c-97030c1796c5-serving-cert\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.457236 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-config\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.457887 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-client-ca\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.461488 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/224644f1-a5e3-4fa5-8c1c-97030c1796c5-serving-cert\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.465780 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785b84857-vwtg7"]
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.484824 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbndx\" (UniqueName: \"kubernetes.io/projected/224644f1-a5e3-4fa5-8c1c-97030c1796c5-kube-api-access-vbndx\") pod \"route-controller-manager-69f6879854-4jdh7\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:07 crc kubenswrapper[4947]: I0125 00:14:07.585312 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:08 crc kubenswrapper[4947]: I0125 00:14:08.050576 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"]
Jan 25 00:14:08 crc kubenswrapper[4947]: I0125 00:14:08.416741 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7" event={"ID":"224644f1-a5e3-4fa5-8c1c-97030c1796c5","Type":"ContainerStarted","Data":"de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272"}
Jan 25 00:14:08 crc kubenswrapper[4947]: I0125 00:14:08.416814 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7" event={"ID":"224644f1-a5e3-4fa5-8c1c-97030c1796c5","Type":"ContainerStarted","Data":"a2cf4520fc74b19c2f49a3aa7b17652852b6f0732aaacfb718e26a7117d8c7ee"}
Jan 25 00:14:08 crc kubenswrapper[4947]: I0125 00:14:08.417940 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:08 crc kubenswrapper[4947]: I0125 00:14:08.803116 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"
Jan 25 00:14:08 crc kubenswrapper[4947]: I0125 00:14:08.839108 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7" podStartSLOduration=3.83907706 podStartE2EDuration="3.83907706s" podCreationTimestamp="2026-01-25 00:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:14:08.442673852 +0000 UTC m=+287.675664332" watchObservedRunningTime="2026-01-25 00:14:08.83907706 +0000 UTC m=+288.072067540"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.105636 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="312d7f0a-f809-4e61-9ddd-46a5328b297c" path="/var/lib/kubelet/pods/312d7f0a-f809-4e61-9ddd-46a5328b297c/volumes"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.106763 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="507c2587-a4ff-48cd-8740-800f9e614c65" path="/var/lib/kubelet/pods/507c2587-a4ff-48cd-8740-800f9e614c65/volumes"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.252090 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"]
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.253290 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.257385 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.257479 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.258375 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.259018 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.260756 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.265210 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.270410 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.273651 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"]
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.408850 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-client-ca\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.408933 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-proxy-ca-bundles\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.409117 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-config\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.409231 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b86fbe-129d-444e-b66d-d5cfcdfe502d-serving-cert\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.409297 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xksg2\" (UniqueName: \"kubernetes.io/projected/38b86fbe-129d-444e-b66d-d5cfcdfe502d-kube-api-access-xksg2\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.510424 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-client-ca\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.510481 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-proxy-ca-bundles\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.510579 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-config\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.510624 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b86fbe-129d-444e-b66d-d5cfcdfe502d-serving-cert\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.510661 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xksg2\" (UniqueName: \"kubernetes.io/projected/38b86fbe-129d-444e-b66d-d5cfcdfe502d-kube-api-access-xksg2\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.512853 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-proxy-ca-bundles\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.514116 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-client-ca\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.515349 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-config\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.522854 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b86fbe-129d-444e-b66d-d5cfcdfe502d-serving-cert\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.539931 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xksg2\" (UniqueName: \"kubernetes.io/projected/38b86fbe-129d-444e-b66d-d5cfcdfe502d-kube-api-access-xksg2\") pod \"controller-manager-6cb4bd4595-qvv49\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.585711 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:09 crc kubenswrapper[4947]: I0125 00:14:09.831876 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"]
Jan 25 00:14:09 crc kubenswrapper[4947]: W0125 00:14:09.840000 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38b86fbe_129d_444e_b66d_d5cfcdfe502d.slice/crio-559e97a56a39b8472fae5df4f9cc2883d13427ca4056db050883156ced694882 WatchSource:0}: Error finding container 559e97a56a39b8472fae5df4f9cc2883d13427ca4056db050883156ced694882: Status 404 returned error can't find the container with id 559e97a56a39b8472fae5df4f9cc2883d13427ca4056db050883156ced694882
Jan 25 00:14:10 crc kubenswrapper[4947]: I0125 00:14:10.433787 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" event={"ID":"38b86fbe-129d-444e-b66d-d5cfcdfe502d","Type":"ContainerStarted","Data":"3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a"}
Jan 25 00:14:10 crc kubenswrapper[4947]: I0125 00:14:10.433877 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" event={"ID":"38b86fbe-129d-444e-b66d-d5cfcdfe502d","Type":"ContainerStarted","Data":"559e97a56a39b8472fae5df4f9cc2883d13427ca4056db050883156ced694882"}
Jan 25 00:14:10 crc kubenswrapper[4947]: I0125 00:14:10.434191 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:10 crc kubenswrapper[4947]: I0125 00:14:10.440523 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"
Jan 25 00:14:10 crc kubenswrapper[4947]: I0125 00:14:10.474093 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" podStartSLOduration=5.474061097 podStartE2EDuration="5.474061097s" podCreationTimestamp="2026-01-25 00:14:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:14:10.454757843 +0000 UTC m=+289.687748293" watchObservedRunningTime="2026-01-25 00:14:10.474061097 +0000 UTC m=+289.707051567"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.251228 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-db689bc6b-lvd9p"]
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.252305 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.263521 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.263621 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.263871 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.265986 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.266394 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.266608 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.266684 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.266823 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.266856 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.267052 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.268812 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.270375 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.271400 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-db689bc6b-lvd9p"]
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.317557 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.318871 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.322557 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343565 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-template-login\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343605 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-service-ca\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343632 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-session\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343651 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343665 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt4n5\" (UniqueName: \"kubernetes.io/projected/c5bbb081-ae69-4597-980e-2163cb2b1208-kube-api-access-vt4n5\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343685 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-audit-policies\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343712 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-template-error\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343736 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343752 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-router-certs\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343774 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343795 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c5bbb081-ae69-4597-980e-2163cb2b1208-audit-dir\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343816 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-cliconfig\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343831 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.343849 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-serving-cert\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445201 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-session\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445240 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt4n5\" (UniqueName: \"kubernetes.io/projected/c5bbb081-ae69-4597-980e-2163cb2b1208-kube-api-access-vt4n5\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445265 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p"
Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445293 4947 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-audit-policies\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445331 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-template-error\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445364 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445386 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-router-certs\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445420 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: 
\"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445449 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c5bbb081-ae69-4597-980e-2163cb2b1208-audit-dir\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445474 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-cliconfig\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445497 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445524 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-serving-cert\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445553 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-template-login\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.445583 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-service-ca\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.446070 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c5bbb081-ae69-4597-980e-2163cb2b1208-audit-dir\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.448601 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-audit-policies\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.448753 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-cliconfig\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc 
kubenswrapper[4947]: I0125 00:14:11.449219 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.449554 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-service-ca\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.454538 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-router-certs\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.456983 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.460215 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-session\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.460354 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.460619 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-serving-cert\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.461189 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-template-error\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.461902 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 
25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.462475 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c5bbb081-ae69-4597-980e-2163cb2b1208-v4-0-config-user-template-login\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.466593 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt4n5\" (UniqueName: \"kubernetes.io/projected/c5bbb081-ae69-4597-980e-2163cb2b1208-kube-api-access-vt4n5\") pod \"oauth-openshift-db689bc6b-lvd9p\" (UID: \"c5bbb081-ae69-4597-980e-2163cb2b1208\") " pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.623988 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:11 crc kubenswrapper[4947]: I0125 00:14:11.868824 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-db689bc6b-lvd9p"] Jan 25 00:14:11 crc kubenswrapper[4947]: W0125 00:14:11.876554 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc5bbb081_ae69_4597_980e_2163cb2b1208.slice/crio-98969cd876455b78cfd35c9d783acac8e24c560e8feb314cb6beca37001ffe3c WatchSource:0}: Error finding container 98969cd876455b78cfd35c9d783acac8e24c560e8feb314cb6beca37001ffe3c: Status 404 returned error can't find the container with id 98969cd876455b78cfd35c9d783acac8e24c560e8feb314cb6beca37001ffe3c Jan 25 00:14:12 crc kubenswrapper[4947]: I0125 00:14:12.445996 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" 
event={"ID":"c5bbb081-ae69-4597-980e-2163cb2b1208","Type":"ContainerStarted","Data":"94f1cd3955178649dfbab7bf5027e159b4e1e3f66d3d5dce5543cbd66c0643dd"} Jan 25 00:14:12 crc kubenswrapper[4947]: I0125 00:14:12.446050 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" event={"ID":"c5bbb081-ae69-4597-980e-2163cb2b1208","Type":"ContainerStarted","Data":"98969cd876455b78cfd35c9d783acac8e24c560e8feb314cb6beca37001ffe3c"} Jan 25 00:14:12 crc kubenswrapper[4947]: I0125 00:14:12.466463 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" podStartSLOduration=64.466442463 podStartE2EDuration="1m4.466442463s" podCreationTimestamp="2026-01-25 00:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:14:12.464605947 +0000 UTC m=+291.697596387" watchObservedRunningTime="2026-01-25 00:14:12.466442463 +0000 UTC m=+291.699432903" Jan 25 00:14:13 crc kubenswrapper[4947]: I0125 00:14:13.452527 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:13 crc kubenswrapper[4947]: I0125 00:14:13.459727 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-db689bc6b-lvd9p" Jan 25 00:14:18 crc kubenswrapper[4947]: I0125 00:14:18.485179 4947 generic.go:334] "Generic (PLEG): container finished" podID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerID="7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369" exitCode=0 Jan 25 00:14:18 crc kubenswrapper[4947]: I0125 00:14:18.485273 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" 
event={"ID":"fa35d682-53d6-4191-9cf9-f48b9f74e858","Type":"ContainerDied","Data":"7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369"} Jan 25 00:14:18 crc kubenswrapper[4947]: I0125 00:14:18.486330 4947 scope.go:117] "RemoveContainer" containerID="7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369" Jan 25 00:14:19 crc kubenswrapper[4947]: I0125 00:14:19.496743 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" event={"ID":"fa35d682-53d6-4191-9cf9-f48b9f74e858","Type":"ContainerStarted","Data":"c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e"} Jan 25 00:14:19 crc kubenswrapper[4947]: I0125 00:14:19.498040 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:14:19 crc kubenswrapper[4947]: I0125 00:14:19.500853 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:14:20 crc kubenswrapper[4947]: I0125 00:14:20.926456 4947 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 25 00:14:25 crc kubenswrapper[4947]: I0125 00:14:25.558890 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"] Jan 25 00:14:25 crc kubenswrapper[4947]: I0125 00:14:25.559414 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" podUID="38b86fbe-129d-444e-b66d-d5cfcdfe502d" containerName="controller-manager" containerID="cri-o://3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a" gracePeriod=30 Jan 25 00:14:25 crc kubenswrapper[4947]: I0125 00:14:25.575442 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"] Jan 25 00:14:25 crc kubenswrapper[4947]: I0125 00:14:25.575688 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7" podUID="224644f1-a5e3-4fa5-8c1c-97030c1796c5" containerName="route-controller-manager" containerID="cri-o://de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272" gracePeriod=30 Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.141336 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.196949 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.274493 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-client-ca\") pod \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.274620 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/224644f1-a5e3-4fa5-8c1c-97030c1796c5-serving-cert\") pod \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.274725 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-config\") pod \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " Jan 25 00:14:26 crc 
kubenswrapper[4947]: I0125 00:14:26.274937 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbndx\" (UniqueName: \"kubernetes.io/projected/224644f1-a5e3-4fa5-8c1c-97030c1796c5-kube-api-access-vbndx\") pod \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\" (UID: \"224644f1-a5e3-4fa5-8c1c-97030c1796c5\") " Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.275325 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-client-ca" (OuterVolumeSpecName: "client-ca") pod "224644f1-a5e3-4fa5-8c1c-97030c1796c5" (UID: "224644f1-a5e3-4fa5-8c1c-97030c1796c5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.275707 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-config" (OuterVolumeSpecName: "config") pod "224644f1-a5e3-4fa5-8c1c-97030c1796c5" (UID: "224644f1-a5e3-4fa5-8c1c-97030c1796c5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.280963 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/224644f1-a5e3-4fa5-8c1c-97030c1796c5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "224644f1-a5e3-4fa5-8c1c-97030c1796c5" (UID: "224644f1-a5e3-4fa5-8c1c-97030c1796c5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.282955 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/224644f1-a5e3-4fa5-8c1c-97030c1796c5-kube-api-access-vbndx" (OuterVolumeSpecName: "kube-api-access-vbndx") pod "224644f1-a5e3-4fa5-8c1c-97030c1796c5" (UID: "224644f1-a5e3-4fa5-8c1c-97030c1796c5"). 
InnerVolumeSpecName "kube-api-access-vbndx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.376626 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xksg2\" (UniqueName: \"kubernetes.io/projected/38b86fbe-129d-444e-b66d-d5cfcdfe502d-kube-api-access-xksg2\") pod \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.376758 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-client-ca\") pod \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.376802 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-config\") pod \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.376870 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b86fbe-129d-444e-b66d-d5cfcdfe502d-serving-cert\") pod \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.376950 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-proxy-ca-bundles\") pod \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\" (UID: \"38b86fbe-129d-444e-b66d-d5cfcdfe502d\") " Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.377451 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/224644f1-a5e3-4fa5-8c1c-97030c1796c5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.377504 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.377532 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbndx\" (UniqueName: \"kubernetes.io/projected/224644f1-a5e3-4fa5-8c1c-97030c1796c5-kube-api-access-vbndx\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.377558 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/224644f1-a5e3-4fa5-8c1c-97030c1796c5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.378559 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "38b86fbe-129d-444e-b66d-d5cfcdfe502d" (UID: "38b86fbe-129d-444e-b66d-d5cfcdfe502d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.378586 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-client-ca" (OuterVolumeSpecName: "client-ca") pod "38b86fbe-129d-444e-b66d-d5cfcdfe502d" (UID: "38b86fbe-129d-444e-b66d-d5cfcdfe502d"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.378667 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-config" (OuterVolumeSpecName: "config") pod "38b86fbe-129d-444e-b66d-d5cfcdfe502d" (UID: "38b86fbe-129d-444e-b66d-d5cfcdfe502d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.381120 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b86fbe-129d-444e-b66d-d5cfcdfe502d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "38b86fbe-129d-444e-b66d-d5cfcdfe502d" (UID: "38b86fbe-129d-444e-b66d-d5cfcdfe502d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.381423 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b86fbe-129d-444e-b66d-d5cfcdfe502d-kube-api-access-xksg2" (OuterVolumeSpecName: "kube-api-access-xksg2") pod "38b86fbe-129d-444e-b66d-d5cfcdfe502d" (UID: "38b86fbe-129d-444e-b66d-d5cfcdfe502d"). InnerVolumeSpecName "kube-api-access-xksg2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.479611 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38b86fbe-129d-444e-b66d-d5cfcdfe502d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.479696 4947 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.479723 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xksg2\" (UniqueName: \"kubernetes.io/projected/38b86fbe-129d-444e-b66d-d5cfcdfe502d-kube-api-access-xksg2\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.479744 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.479765 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38b86fbe-129d-444e-b66d-d5cfcdfe502d-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.548996 4947 generic.go:334] "Generic (PLEG): container finished" podID="38b86fbe-129d-444e-b66d-d5cfcdfe502d" containerID="3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a" exitCode=0 Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.549225 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" event={"ID":"38b86fbe-129d-444e-b66d-d5cfcdfe502d","Type":"ContainerDied","Data":"3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a"} Jan 25 00:14:26 crc 
kubenswrapper[4947]: I0125 00:14:26.549248 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.549272 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb4bd4595-qvv49" event={"ID":"38b86fbe-129d-444e-b66d-d5cfcdfe502d","Type":"ContainerDied","Data":"559e97a56a39b8472fae5df4f9cc2883d13427ca4056db050883156ced694882"} Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.549314 4947 scope.go:117] "RemoveContainer" containerID="3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.559971 4947 generic.go:334] "Generic (PLEG): container finished" podID="224644f1-a5e3-4fa5-8c1c-97030c1796c5" containerID="de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272" exitCode=0 Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.560032 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7" event={"ID":"224644f1-a5e3-4fa5-8c1c-97030c1796c5","Type":"ContainerDied","Data":"de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272"} Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.560068 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7" event={"ID":"224644f1-a5e3-4fa5-8c1c-97030c1796c5","Type":"ContainerDied","Data":"a2cf4520fc74b19c2f49a3aa7b17652852b6f0732aaacfb718e26a7117d8c7ee"} Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.560203 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.598415 4947 scope.go:117] "RemoveContainer" containerID="3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a" Jan 25 00:14:26 crc kubenswrapper[4947]: E0125 00:14:26.599059 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a\": container with ID starting with 3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a not found: ID does not exist" containerID="3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.599101 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a"} err="failed to get container status \"3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a\": rpc error: code = NotFound desc = could not find container \"3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a\": container with ID starting with 3469a54ff398bc64c543cc9ff2040987e444482e29fde28bd78f015129ddf44a not found: ID does not exist" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.599158 4947 scope.go:117] "RemoveContainer" containerID="de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.600547 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"] Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.606676 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6cb4bd4595-qvv49"] Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.615464 4947 scope.go:117] 
"RemoveContainer" containerID="de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.616082 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"] Jan 25 00:14:26 crc kubenswrapper[4947]: E0125 00:14:26.616263 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272\": container with ID starting with de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272 not found: ID does not exist" containerID="de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.616320 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272"} err="failed to get container status \"de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272\": rpc error: code = NotFound desc = could not find container \"de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272\": container with ID starting with de4c2ecf8eb9a56a9d6154251e9f5d91fab263e83e80b6d2c8953f5975d31272 not found: ID does not exist" Jan 25 00:14:26 crc kubenswrapper[4947]: I0125 00:14:26.620642 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f6879854-4jdh7"] Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.100877 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="224644f1-a5e3-4fa5-8c1c-97030c1796c5" path="/var/lib/kubelet/pods/224644f1-a5e3-4fa5-8c1c-97030c1796c5/volumes" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.102089 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38b86fbe-129d-444e-b66d-d5cfcdfe502d" 
path="/var/lib/kubelet/pods/38b86fbe-129d-444e-b66d-d5cfcdfe502d/volumes" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.262067 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf"] Jan 25 00:14:27 crc kubenswrapper[4947]: E0125 00:14:27.262960 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b86fbe-129d-444e-b66d-d5cfcdfe502d" containerName="controller-manager" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.262998 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b86fbe-129d-444e-b66d-d5cfcdfe502d" containerName="controller-manager" Jan 25 00:14:27 crc kubenswrapper[4947]: E0125 00:14:27.263035 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="224644f1-a5e3-4fa5-8c1c-97030c1796c5" containerName="route-controller-manager" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.263052 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="224644f1-a5e3-4fa5-8c1c-97030c1796c5" containerName="route-controller-manager" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.263277 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="224644f1-a5e3-4fa5-8c1c-97030c1796c5" containerName="route-controller-manager" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.263305 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b86fbe-129d-444e-b66d-d5cfcdfe502d" containerName="controller-manager" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.263954 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.267000 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.267381 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.267757 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.268335 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.268418 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.268469 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.272929 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc"] Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.274191 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.276983 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.279176 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.279366 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.279581 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.279646 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.280025 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.282906 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf"] Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.292070 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc"] Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.292183 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-client-ca\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " 
pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.292248 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-proxy-ca-bundles\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.292310 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-client-ca\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.292352 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-config\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.292446 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mscq\" (UniqueName: \"kubernetes.io/projected/02b69aab-7cbd-4f58-8756-c1c5b615c33d-kube-api-access-8mscq\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.292617 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-bf76f\" (UniqueName: \"kubernetes.io/projected/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-kube-api-access-bf76f\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.292723 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02b69aab-7cbd-4f58-8756-c1c5b615c33d-serving-cert\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.292793 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-serving-cert\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.292874 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-config\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.298909 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.393333 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-config\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.393393 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-client-ca\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.393426 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-proxy-ca-bundles\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.393463 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-client-ca\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.393487 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-config\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.393510 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mscq\" (UniqueName: \"kubernetes.io/projected/02b69aab-7cbd-4f58-8756-c1c5b615c33d-kube-api-access-8mscq\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.393537 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf76f\" (UniqueName: \"kubernetes.io/projected/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-kube-api-access-bf76f\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.393593 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02b69aab-7cbd-4f58-8756-c1c5b615c33d-serving-cert\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.393618 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-serving-cert\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.394805 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-config\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: 
\"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.395341 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-proxy-ca-bundles\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.395927 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-client-ca\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.396002 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-config\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.396715 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-client-ca\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.402817 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/02b69aab-7cbd-4f58-8756-c1c5b615c33d-serving-cert\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.403996 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-serving-cert\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.424969 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf76f\" (UniqueName: \"kubernetes.io/projected/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-kube-api-access-bf76f\") pod \"route-controller-manager-5c8f48f598-mnljf\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.427987 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mscq\" (UniqueName: \"kubernetes.io/projected/02b69aab-7cbd-4f58-8756-c1c5b615c33d-kube-api-access-8mscq\") pod \"controller-manager-5b9d4449c6-5zzzc\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") " pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.593860 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.619218 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.868242 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc"] Jan 25 00:14:27 crc kubenswrapper[4947]: W0125 00:14:27.873784 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02b69aab_7cbd_4f58_8756_c1c5b615c33d.slice/crio-38a5c9cb609e714547362a4cc611ea212282d2141fd9223ceadf053b5b3b5a11 WatchSource:0}: Error finding container 38a5c9cb609e714547362a4cc611ea212282d2141fd9223ceadf053b5b3b5a11: Status 404 returned error can't find the container with id 38a5c9cb609e714547362a4cc611ea212282d2141fd9223ceadf053b5b3b5a11 Jan 25 00:14:27 crc kubenswrapper[4947]: I0125 00:14:27.910268 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf"] Jan 25 00:14:27 crc kubenswrapper[4947]: W0125 00:14:27.915147 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod563d5ba6_16c8_4a0e_8f80_6e0f76eac00e.slice/crio-501e260a4d828269ffd7f38ce4c58069d90dbc9c7273eb6c4c277594a413682e WatchSource:0}: Error finding container 501e260a4d828269ffd7f38ce4c58069d90dbc9c7273eb6c4c277594a413682e: Status 404 returned error can't find the container with id 501e260a4d828269ffd7f38ce4c58069d90dbc9c7273eb6c4c277594a413682e Jan 25 00:14:28 crc kubenswrapper[4947]: I0125 00:14:28.576317 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" event={"ID":"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e","Type":"ContainerStarted","Data":"849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc"} Jan 25 00:14:28 crc kubenswrapper[4947]: I0125 00:14:28.576703 4947 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:28 crc kubenswrapper[4947]: I0125 00:14:28.576720 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" event={"ID":"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e","Type":"ContainerStarted","Data":"501e260a4d828269ffd7f38ce4c58069d90dbc9c7273eb6c4c277594a413682e"} Jan 25 00:14:28 crc kubenswrapper[4947]: I0125 00:14:28.577936 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" event={"ID":"02b69aab-7cbd-4f58-8756-c1c5b615c33d","Type":"ContainerStarted","Data":"d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea"} Jan 25 00:14:28 crc kubenswrapper[4947]: I0125 00:14:28.577967 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" event={"ID":"02b69aab-7cbd-4f58-8756-c1c5b615c33d","Type":"ContainerStarted","Data":"38a5c9cb609e714547362a4cc611ea212282d2141fd9223ceadf053b5b3b5a11"} Jan 25 00:14:28 crc kubenswrapper[4947]: I0125 00:14:28.578202 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:28 crc kubenswrapper[4947]: I0125 00:14:28.582837 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" Jan 25 00:14:28 crc kubenswrapper[4947]: I0125 00:14:28.593926 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" podStartSLOduration=3.593909113 podStartE2EDuration="3.593909113s" podCreationTimestamp="2026-01-25 00:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:14:28.59380854 +0000 UTC m=+307.826799000" watchObservedRunningTime="2026-01-25 00:14:28.593909113 +0000 UTC m=+307.826899553" Jan 25 00:14:28 crc kubenswrapper[4947]: I0125 00:14:28.614032 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" podStartSLOduration=3.614011778 podStartE2EDuration="3.614011778s" podCreationTimestamp="2026-01-25 00:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:14:28.610755314 +0000 UTC m=+307.843745754" watchObservedRunningTime="2026-01-25 00:14:28.614011778 +0000 UTC m=+307.847002218" Jan 25 00:14:28 crc kubenswrapper[4947]: I0125 00:14:28.623293 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:45 crc kubenswrapper[4947]: I0125 00:14:45.767251 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf"] Jan 25 00:14:45 crc kubenswrapper[4947]: I0125 00:14:45.769507 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" podUID="563d5ba6-16c8-4a0e-8f80-6e0f76eac00e" containerName="route-controller-manager" containerID="cri-o://849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc" gracePeriod=30 Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.282030 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.393493 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-client-ca\") pod \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.393988 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf76f\" (UniqueName: \"kubernetes.io/projected/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-kube-api-access-bf76f\") pod \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.394047 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-config\") pod \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.394168 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-serving-cert\") pod \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\" (UID: \"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e\") " Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.394976 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-client-ca" (OuterVolumeSpecName: "client-ca") pod "563d5ba6-16c8-4a0e-8f80-6e0f76eac00e" (UID: "563d5ba6-16c8-4a0e-8f80-6e0f76eac00e"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.395035 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-config" (OuterVolumeSpecName: "config") pod "563d5ba6-16c8-4a0e-8f80-6e0f76eac00e" (UID: "563d5ba6-16c8-4a0e-8f80-6e0f76eac00e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.399641 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "563d5ba6-16c8-4a0e-8f80-6e0f76eac00e" (UID: "563d5ba6-16c8-4a0e-8f80-6e0f76eac00e"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.399964 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-kube-api-access-bf76f" (OuterVolumeSpecName: "kube-api-access-bf76f") pod "563d5ba6-16c8-4a0e-8f80-6e0f76eac00e" (UID: "563d5ba6-16c8-4a0e-8f80-6e0f76eac00e"). InnerVolumeSpecName "kube-api-access-bf76f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.495728 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.495776 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-client-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.495787 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf76f\" (UniqueName: \"kubernetes.io/projected/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-kube-api-access-bf76f\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.495796 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.696217 4947 generic.go:334] "Generic (PLEG): container finished" podID="563d5ba6-16c8-4a0e-8f80-6e0f76eac00e" containerID="849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc" exitCode=0 Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.696278 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" event={"ID":"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e","Type":"ContainerDied","Data":"849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc"} Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.696291 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.696320 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf" event={"ID":"563d5ba6-16c8-4a0e-8f80-6e0f76eac00e","Type":"ContainerDied","Data":"501e260a4d828269ffd7f38ce4c58069d90dbc9c7273eb6c4c277594a413682e"} Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.696346 4947 scope.go:117] "RemoveContainer" containerID="849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.724649 4947 scope.go:117] "RemoveContainer" containerID="849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc" Jan 25 00:14:46 crc kubenswrapper[4947]: E0125 00:14:46.725172 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc\": container with ID starting with 849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc not found: ID does not exist" containerID="849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.725203 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc"} err="failed to get container status \"849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc\": rpc error: code = NotFound desc = could not find container \"849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc\": container with ID starting with 849cd7d5924c30ac3f086277a35aef4c9b05af412e8914f3d23a08bcce0b23fc not found: ID does not exist" Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.732719 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf"] Jan 25 00:14:46 crc kubenswrapper[4947]: I0125 00:14:46.741752 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5c8f48f598-mnljf"] Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.072928 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.073012 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.102833 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="563d5ba6-16c8-4a0e-8f80-6e0f76eac00e" path="/var/lib/kubelet/pods/563d5ba6-16c8-4a0e-8f80-6e0f76eac00e/volumes" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.295298 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c"] Jan 25 00:14:47 crc kubenswrapper[4947]: E0125 00:14:47.295698 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="563d5ba6-16c8-4a0e-8f80-6e0f76eac00e" containerName="route-controller-manager" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.295727 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="563d5ba6-16c8-4a0e-8f80-6e0f76eac00e" containerName="route-controller-manager" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.296001 4947 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="563d5ba6-16c8-4a0e-8f80-6e0f76eac00e" containerName="route-controller-manager" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.296772 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.299295 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.300115 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.300343 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.303362 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.307605 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.307741 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.310990 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c"] Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.409361 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpz7q\" (UniqueName: \"kubernetes.io/projected/13a2325d-775c-43ea-8a53-3011854a5878-kube-api-access-wpz7q\") pod 
\"route-controller-manager-69f6879854-z2b5c\" (UID: \"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.409512 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13a2325d-775c-43ea-8a53-3011854a5878-client-ca\") pod \"route-controller-manager-69f6879854-z2b5c\" (UID: \"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.410038 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a2325d-775c-43ea-8a53-3011854a5878-config\") pod \"route-controller-manager-69f6879854-z2b5c\" (UID: \"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.410117 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13a2325d-775c-43ea-8a53-3011854a5878-serving-cert\") pod \"route-controller-manager-69f6879854-z2b5c\" (UID: \"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.511404 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13a2325d-775c-43ea-8a53-3011854a5878-client-ca\") pod \"route-controller-manager-69f6879854-z2b5c\" (UID: \"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 
00:14:47.511506 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a2325d-775c-43ea-8a53-3011854a5878-config\") pod \"route-controller-manager-69f6879854-z2b5c\" (UID: \"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.511564 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13a2325d-775c-43ea-8a53-3011854a5878-serving-cert\") pod \"route-controller-manager-69f6879854-z2b5c\" (UID: \"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.511664 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpz7q\" (UniqueName: \"kubernetes.io/projected/13a2325d-775c-43ea-8a53-3011854a5878-kube-api-access-wpz7q\") pod \"route-controller-manager-69f6879854-z2b5c\" (UID: \"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.513042 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/13a2325d-775c-43ea-8a53-3011854a5878-client-ca\") pod \"route-controller-manager-69f6879854-z2b5c\" (UID: \"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.513195 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13a2325d-775c-43ea-8a53-3011854a5878-config\") pod \"route-controller-manager-69f6879854-z2b5c\" (UID: 
\"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.524071 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/13a2325d-775c-43ea-8a53-3011854a5878-serving-cert\") pod \"route-controller-manager-69f6879854-z2b5c\" (UID: \"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.559960 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpz7q\" (UniqueName: \"kubernetes.io/projected/13a2325d-775c-43ea-8a53-3011854a5878-kube-api-access-wpz7q\") pod \"route-controller-manager-69f6879854-z2b5c\" (UID: \"13a2325d-775c-43ea-8a53-3011854a5878\") " pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:47 crc kubenswrapper[4947]: I0125 00:14:47.620790 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:48 crc kubenswrapper[4947]: I0125 00:14:48.066225 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c"] Jan 25 00:14:48 crc kubenswrapper[4947]: I0125 00:14:48.711879 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" event={"ID":"13a2325d-775c-43ea-8a53-3011854a5878","Type":"ContainerStarted","Data":"4f7167e8bc56ea2015f74a4a9155f08d29d52a53ff9dabdf6b3467b80213d58c"} Jan 25 00:14:48 crc kubenswrapper[4947]: I0125 00:14:48.712267 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:48 crc kubenswrapper[4947]: I0125 00:14:48.712284 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" event={"ID":"13a2325d-775c-43ea-8a53-3011854a5878","Type":"ContainerStarted","Data":"c493ad91366d94153354e4bd54c75ea5dd6f48d5a543429af74426b61243ed0b"} Jan 25 00:14:48 crc kubenswrapper[4947]: I0125 00:14:48.722332 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" Jan 25 00:14:48 crc kubenswrapper[4947]: I0125 00:14:48.733221 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-69f6879854-z2b5c" podStartSLOduration=3.733197864 podStartE2EDuration="3.733197864s" podCreationTimestamp="2026-01-25 00:14:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:14:48.729805707 +0000 UTC m=+327.962796197" 
watchObservedRunningTime="2026-01-25 00:14:48.733197864 +0000 UTC m=+327.966188354" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.176989 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn"] Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.178704 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.180674 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.184595 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.191720 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn"] Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.285527 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea9619f3-0314-493b-8fac-ab4d927cb2be-secret-volume\") pod \"collect-profiles-29488335-hfmmn\" (UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.285827 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea9619f3-0314-493b-8fac-ab4d927cb2be-config-volume\") pod \"collect-profiles-29488335-hfmmn\" (UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:00 crc 
kubenswrapper[4947]: I0125 00:15:00.285965 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxks6\" (UniqueName: \"kubernetes.io/projected/ea9619f3-0314-493b-8fac-ab4d927cb2be-kube-api-access-hxks6\") pod \"collect-profiles-29488335-hfmmn\" (UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.386854 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxks6\" (UniqueName: \"kubernetes.io/projected/ea9619f3-0314-493b-8fac-ab4d927cb2be-kube-api-access-hxks6\") pod \"collect-profiles-29488335-hfmmn\" (UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.386918 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea9619f3-0314-493b-8fac-ab4d927cb2be-secret-volume\") pod \"collect-profiles-29488335-hfmmn\" (UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.386948 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea9619f3-0314-493b-8fac-ab4d927cb2be-config-volume\") pod \"collect-profiles-29488335-hfmmn\" (UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.387912 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea9619f3-0314-493b-8fac-ab4d927cb2be-config-volume\") pod \"collect-profiles-29488335-hfmmn\" 
(UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.395787 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea9619f3-0314-493b-8fac-ab4d927cb2be-secret-volume\") pod \"collect-profiles-29488335-hfmmn\" (UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.403515 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxks6\" (UniqueName: \"kubernetes.io/projected/ea9619f3-0314-493b-8fac-ab4d927cb2be-kube-api-access-hxks6\") pod \"collect-profiles-29488335-hfmmn\" (UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.505254 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:00 crc kubenswrapper[4947]: I0125 00:15:00.921885 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn"] Jan 25 00:15:00 crc kubenswrapper[4947]: W0125 00:15:00.931251 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea9619f3_0314_493b_8fac_ab4d927cb2be.slice/crio-475d383198c34fac1d7b4be7345f00f6e1ed79707f478a21b19eca643f099541 WatchSource:0}: Error finding container 475d383198c34fac1d7b4be7345f00f6e1ed79707f478a21b19eca643f099541: Status 404 returned error can't find the container with id 475d383198c34fac1d7b4be7345f00f6e1ed79707f478a21b19eca643f099541 Jan 25 00:15:01 crc kubenswrapper[4947]: I0125 00:15:01.797985 4947 generic.go:334] "Generic (PLEG): container finished" podID="ea9619f3-0314-493b-8fac-ab4d927cb2be" containerID="24c5fcd7011a72bae7e895ae2163482a8b930148e6544cfb54bf6c32060f0397" exitCode=0 Jan 25 00:15:01 crc kubenswrapper[4947]: I0125 00:15:01.798121 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" event={"ID":"ea9619f3-0314-493b-8fac-ab4d927cb2be","Type":"ContainerDied","Data":"24c5fcd7011a72bae7e895ae2163482a8b930148e6544cfb54bf6c32060f0397"} Jan 25 00:15:01 crc kubenswrapper[4947]: I0125 00:15:01.798590 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" event={"ID":"ea9619f3-0314-493b-8fac-ab4d927cb2be","Type":"ContainerStarted","Data":"475d383198c34fac1d7b4be7345f00f6e1ed79707f478a21b19eca643f099541"} Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.224837 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.333457 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea9619f3-0314-493b-8fac-ab4d927cb2be-config-volume\") pod \"ea9619f3-0314-493b-8fac-ab4d927cb2be\" (UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.333612 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea9619f3-0314-493b-8fac-ab4d927cb2be-secret-volume\") pod \"ea9619f3-0314-493b-8fac-ab4d927cb2be\" (UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.333771 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxks6\" (UniqueName: \"kubernetes.io/projected/ea9619f3-0314-493b-8fac-ab4d927cb2be-kube-api-access-hxks6\") pod \"ea9619f3-0314-493b-8fac-ab4d927cb2be\" (UID: \"ea9619f3-0314-493b-8fac-ab4d927cb2be\") " Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.335398 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea9619f3-0314-493b-8fac-ab4d927cb2be-config-volume" (OuterVolumeSpecName: "config-volume") pod "ea9619f3-0314-493b-8fac-ab4d927cb2be" (UID: "ea9619f3-0314-493b-8fac-ab4d927cb2be"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.342783 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea9619f3-0314-493b-8fac-ab4d927cb2be-kube-api-access-hxks6" (OuterVolumeSpecName: "kube-api-access-hxks6") pod "ea9619f3-0314-493b-8fac-ab4d927cb2be" (UID: "ea9619f3-0314-493b-8fac-ab4d927cb2be"). 
InnerVolumeSpecName "kube-api-access-hxks6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.343295 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea9619f3-0314-493b-8fac-ab4d927cb2be-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ea9619f3-0314-493b-8fac-ab4d927cb2be" (UID: "ea9619f3-0314-493b-8fac-ab4d927cb2be"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.435742 4947 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ea9619f3-0314-493b-8fac-ab4d927cb2be-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.435802 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxks6\" (UniqueName: \"kubernetes.io/projected/ea9619f3-0314-493b-8fac-ab4d927cb2be-kube-api-access-hxks6\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.435818 4947 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea9619f3-0314-493b-8fac-ab4d927cb2be-config-volume\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.816073 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" event={"ID":"ea9619f3-0314-493b-8fac-ab4d927cb2be","Type":"ContainerDied","Data":"475d383198c34fac1d7b4be7345f00f6e1ed79707f478a21b19eca643f099541"} Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.816172 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488335-hfmmn" Jan 25 00:15:03 crc kubenswrapper[4947]: I0125 00:15:03.816198 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="475d383198c34fac1d7b4be7345f00f6e1ed79707f478a21b19eca643f099541" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.643625 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-bh4mm"] Jan 25 00:15:10 crc kubenswrapper[4947]: E0125 00:15:10.644412 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea9619f3-0314-493b-8fac-ab4d927cb2be" containerName="collect-profiles" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.644426 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea9619f3-0314-493b-8fac-ab4d927cb2be" containerName="collect-profiles" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.644538 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea9619f3-0314-493b-8fac-ab4d927cb2be" containerName="collect-profiles" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.644946 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.660386 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-bh4mm"] Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.750771 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.750838 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/71aa1854-f5dd-4aef-a80a-0121225c19d8-registry-tls\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.750868 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71aa1854-f5dd-4aef-a80a-0121225c19d8-trusted-ca\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.750892 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/71aa1854-f5dd-4aef-a80a-0121225c19d8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.750925 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp5hl\" (UniqueName: \"kubernetes.io/projected/71aa1854-f5dd-4aef-a80a-0121225c19d8-kube-api-access-fp5hl\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.750944 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/71aa1854-f5dd-4aef-a80a-0121225c19d8-registry-certificates\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.750982 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/71aa1854-f5dd-4aef-a80a-0121225c19d8-bound-sa-token\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.751017 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/71aa1854-f5dd-4aef-a80a-0121225c19d8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.802153 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.852185 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/71aa1854-f5dd-4aef-a80a-0121225c19d8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.852262 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp5hl\" (UniqueName: \"kubernetes.io/projected/71aa1854-f5dd-4aef-a80a-0121225c19d8-kube-api-access-fp5hl\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.852286 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/71aa1854-f5dd-4aef-a80a-0121225c19d8-registry-certificates\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.852321 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/71aa1854-f5dd-4aef-a80a-0121225c19d8-bound-sa-token\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 
crc kubenswrapper[4947]: I0125 00:15:10.852335 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/71aa1854-f5dd-4aef-a80a-0121225c19d8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.852375 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/71aa1854-f5dd-4aef-a80a-0121225c19d8-registry-tls\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.852392 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71aa1854-f5dd-4aef-a80a-0121225c19d8-trusted-ca\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.853268 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/71aa1854-f5dd-4aef-a80a-0121225c19d8-ca-trust-extracted\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.854346 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/71aa1854-f5dd-4aef-a80a-0121225c19d8-registry-certificates\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.854378 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71aa1854-f5dd-4aef-a80a-0121225c19d8-trusted-ca\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.869051 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/71aa1854-f5dd-4aef-a80a-0121225c19d8-installation-pull-secrets\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.872080 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp5hl\" (UniqueName: \"kubernetes.io/projected/71aa1854-f5dd-4aef-a80a-0121225c19d8-kube-api-access-fp5hl\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.873878 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/71aa1854-f5dd-4aef-a80a-0121225c19d8-bound-sa-token\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: \"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.874193 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/71aa1854-f5dd-4aef-a80a-0121225c19d8-registry-tls\") pod \"image-registry-66df7c8f76-bh4mm\" (UID: 
\"71aa1854-f5dd-4aef-a80a-0121225c19d8\") " pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:10 crc kubenswrapper[4947]: I0125 00:15:10.971844 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:11 crc kubenswrapper[4947]: I0125 00:15:11.491868 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-bh4mm"] Jan 25 00:15:11 crc kubenswrapper[4947]: I0125 00:15:11.879895 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" event={"ID":"71aa1854-f5dd-4aef-a80a-0121225c19d8","Type":"ContainerStarted","Data":"809fda81fa0dcf1ee74bb5b7f70fc3d1f84e5139fa0f867f993b60eab1295fdc"} Jan 25 00:15:11 crc kubenswrapper[4947]: I0125 00:15:11.879938 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" event={"ID":"71aa1854-f5dd-4aef-a80a-0121225c19d8","Type":"ContainerStarted","Data":"d76c6cee3736aac6763294b3f629534a4a8f55d33317f67b29ab01e0c2161eb4"} Jan 25 00:15:11 crc kubenswrapper[4947]: I0125 00:15:11.880141 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:11 crc kubenswrapper[4947]: I0125 00:15:11.900749 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" podStartSLOduration=1.900730814 podStartE2EDuration="1.900730814s" podCreationTimestamp="2026-01-25 00:15:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:15:11.895832167 +0000 UTC m=+351.128822627" watchObservedRunningTime="2026-01-25 00:15:11.900730814 +0000 UTC m=+351.133721264" Jan 25 00:15:17 crc kubenswrapper[4947]: I0125 
00:15:17.072661 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:15:17 crc kubenswrapper[4947]: I0125 00:15:17.073244 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.244585 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qzj76"] Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.245552 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qzj76" podUID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" containerName="registry-server" containerID="cri-o://625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1" gracePeriod=30 Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.261973 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-47m2l"] Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.262699 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-47m2l" podUID="ad96bcad-395b-4844-9992-00acdf7436c2" containerName="registry-server" containerID="cri-o://d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289" gracePeriod=30 Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.283316 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vx9fn"] Jan 25 00:15:24 crc 
kubenswrapper[4947]: I0125 00:15:24.289269 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwwnp"] Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.289574 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" podUID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerName="marketplace-operator" containerID="cri-o://c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e" gracePeriod=30 Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.289735 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wwwnp" podUID="06282146-8047-4104-b189-c896e5b7f8b9" containerName="registry-server" containerID="cri-o://8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b" gracePeriod=30 Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.315690 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ltw77"] Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.316029 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ltw77" podUID="49263faf-29f4-481c-aafd-a271a29c209a" containerName="registry-server" containerID="cri-o://482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3" gracePeriod=30 Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.323655 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mbj6z"] Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.324408 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.335728 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mbj6z"] Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.463965 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/94a09856-1120-4003-a601-ee3c9121eb51-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mbj6z\" (UID: \"94a09856-1120-4003-a601-ee3c9121eb51\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.464017 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94a09856-1120-4003-a601-ee3c9121eb51-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mbj6z\" (UID: \"94a09856-1120-4003-a601-ee3c9121eb51\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.464049 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpnjk\" (UniqueName: \"kubernetes.io/projected/94a09856-1120-4003-a601-ee3c9121eb51-kube-api-access-kpnjk\") pod \"marketplace-operator-79b997595-mbj6z\" (UID: \"94a09856-1120-4003-a601-ee3c9121eb51\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.565014 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94a09856-1120-4003-a601-ee3c9121eb51-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mbj6z\" (UID: 
\"94a09856-1120-4003-a601-ee3c9121eb51\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.565073 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpnjk\" (UniqueName: \"kubernetes.io/projected/94a09856-1120-4003-a601-ee3c9121eb51-kube-api-access-kpnjk\") pod \"marketplace-operator-79b997595-mbj6z\" (UID: \"94a09856-1120-4003-a601-ee3c9121eb51\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.565196 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/94a09856-1120-4003-a601-ee3c9121eb51-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mbj6z\" (UID: \"94a09856-1120-4003-a601-ee3c9121eb51\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.566816 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/94a09856-1120-4003-a601-ee3c9121eb51-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mbj6z\" (UID: \"94a09856-1120-4003-a601-ee3c9121eb51\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.575834 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/94a09856-1120-4003-a601-ee3c9121eb51-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mbj6z\" (UID: \"94a09856-1120-4003-a601-ee3c9121eb51\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.583775 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kpnjk\" (UniqueName: \"kubernetes.io/projected/94a09856-1120-4003-a601-ee3c9121eb51-kube-api-access-kpnjk\") pod \"marketplace-operator-79b997595-mbj6z\" (UID: \"94a09856-1120-4003-a601-ee3c9121eb51\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.763905 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.763969 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.812942 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.826512 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.859152 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.883869 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-catalog-content\") pod \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.883916 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-utilities\") pod \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.884530 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxzk6\" (UniqueName: \"kubernetes.io/projected/900aeb01-050c-45b8-936c-e5f8d73ebeb5-kube-api-access-vxzk6\") pod \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\" (UID: \"900aeb01-050c-45b8-936c-e5f8d73ebeb5\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.889502 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-utilities" (OuterVolumeSpecName: "utilities") pod "900aeb01-050c-45b8-936c-e5f8d73ebeb5" (UID: "900aeb01-050c-45b8-936c-e5f8d73ebeb5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.890148 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900aeb01-050c-45b8-936c-e5f8d73ebeb5-kube-api-access-vxzk6" (OuterVolumeSpecName: "kube-api-access-vxzk6") pod "900aeb01-050c-45b8-936c-e5f8d73ebeb5" (UID: "900aeb01-050c-45b8-936c-e5f8d73ebeb5"). InnerVolumeSpecName "kube-api-access-vxzk6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.925715 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.980096 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "900aeb01-050c-45b8-936c-e5f8d73ebeb5" (UID: "900aeb01-050c-45b8-936c-e5f8d73ebeb5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.981760 4947 generic.go:334] "Generic (PLEG): container finished" podID="06282146-8047-4104-b189-c896e5b7f8b9" containerID="8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b" exitCode=0 Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.981818 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwwnp" event={"ID":"06282146-8047-4104-b189-c896e5b7f8b9","Type":"ContainerDied","Data":"8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b"} Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.981868 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwwnp" event={"ID":"06282146-8047-4104-b189-c896e5b7f8b9","Type":"ContainerDied","Data":"b331dec527a60132595158dee76520a26cd144ddc3aa45e156eaf1db6341fcb3"} Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.981888 4947 scope.go:117] "RemoveContainer" containerID="8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.982025 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wwwnp" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986070 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-catalog-content\") pod \"06282146-8047-4104-b189-c896e5b7f8b9\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986299 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-catalog-content\") pod \"ad96bcad-395b-4844-9992-00acdf7436c2\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986368 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-utilities\") pod \"ad96bcad-395b-4844-9992-00acdf7436c2\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986401 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-trusted-ca\") pod \"fa35d682-53d6-4191-9cf9-f48b9f74e858\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986477 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhdwv\" (UniqueName: \"kubernetes.io/projected/ad96bcad-395b-4844-9992-00acdf7436c2-kube-api-access-fhdwv\") pod \"ad96bcad-395b-4844-9992-00acdf7436c2\" (UID: \"ad96bcad-395b-4844-9992-00acdf7436c2\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986543 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-8th5h\" (UniqueName: \"kubernetes.io/projected/06282146-8047-4104-b189-c896e5b7f8b9-kube-api-access-8th5h\") pod \"06282146-8047-4104-b189-c896e5b7f8b9\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986590 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxnl4\" (UniqueName: \"kubernetes.io/projected/fa35d682-53d6-4191-9cf9-f48b9f74e858-kube-api-access-sxnl4\") pod \"fa35d682-53d6-4191-9cf9-f48b9f74e858\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986621 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-operator-metrics\") pod \"fa35d682-53d6-4191-9cf9-f48b9f74e858\" (UID: \"fa35d682-53d6-4191-9cf9-f48b9f74e858\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986648 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-utilities\") pod \"06282146-8047-4104-b189-c896e5b7f8b9\" (UID: \"06282146-8047-4104-b189-c896e5b7f8b9\") " Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986883 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxzk6\" (UniqueName: \"kubernetes.io/projected/900aeb01-050c-45b8-936c-e5f8d73ebeb5-kube-api-access-vxzk6\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986902 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.986914 4947 reconciler_common.go:293] "Volume detached for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900aeb01-050c-45b8-936c-e5f8d73ebeb5-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.987570 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-utilities" (OuterVolumeSpecName: "utilities") pod "ad96bcad-395b-4844-9992-00acdf7436c2" (UID: "ad96bcad-395b-4844-9992-00acdf7436c2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.987622 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "fa35d682-53d6-4191-9cf9-f48b9f74e858" (UID: "fa35d682-53d6-4191-9cf9-f48b9f74e858"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.988309 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-utilities" (OuterVolumeSpecName: "utilities") pod "06282146-8047-4104-b189-c896e5b7f8b9" (UID: "06282146-8047-4104-b189-c896e5b7f8b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.991556 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa35d682-53d6-4191-9cf9-f48b9f74e858-kube-api-access-sxnl4" (OuterVolumeSpecName: "kube-api-access-sxnl4") pod "fa35d682-53d6-4191-9cf9-f48b9f74e858" (UID: "fa35d682-53d6-4191-9cf9-f48b9f74e858"). InnerVolumeSpecName "kube-api-access-sxnl4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.992518 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "fa35d682-53d6-4191-9cf9-f48b9f74e858" (UID: "fa35d682-53d6-4191-9cf9-f48b9f74e858"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.992540 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad96bcad-395b-4844-9992-00acdf7436c2-kube-api-access-fhdwv" (OuterVolumeSpecName: "kube-api-access-fhdwv") pod "ad96bcad-395b-4844-9992-00acdf7436c2" (UID: "ad96bcad-395b-4844-9992-00acdf7436c2"). InnerVolumeSpecName "kube-api-access-fhdwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.992869 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06282146-8047-4104-b189-c896e5b7f8b9-kube-api-access-8th5h" (OuterVolumeSpecName: "kube-api-access-8th5h") pod "06282146-8047-4104-b189-c896e5b7f8b9" (UID: "06282146-8047-4104-b189-c896e5b7f8b9"). InnerVolumeSpecName "kube-api-access-8th5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.994574 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qzj76" Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.994650 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzj76" event={"ID":"900aeb01-050c-45b8-936c-e5f8d73ebeb5","Type":"ContainerDied","Data":"625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1"} Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.994442 4947 generic.go:334] "Generic (PLEG): container finished" podID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" containerID="625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1" exitCode=0 Jan 25 00:15:24 crc kubenswrapper[4947]: I0125 00:15:24.995524 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qzj76" event={"ID":"900aeb01-050c-45b8-936c-e5f8d73ebeb5","Type":"ContainerDied","Data":"596449ceb20f31ed206815663af8903fec2583551204ec75b85d39be48c2895f"} Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.001225 4947 generic.go:334] "Generic (PLEG): container finished" podID="49263faf-29f4-481c-aafd-a271a29c209a" containerID="482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3" exitCode=0 Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.001297 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltw77" event={"ID":"49263faf-29f4-481c-aafd-a271a29c209a","Type":"ContainerDied","Data":"482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3"} Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.001323 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltw77" event={"ID":"49263faf-29f4-481c-aafd-a271a29c209a","Type":"ContainerDied","Data":"23655937ab043534ca01347d9a2964b60c41f2a6eae0705e6c094b13084701de"} Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.001389 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ltw77" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.006335 4947 scope.go:117] "RemoveContainer" containerID="9e14cc4195bf11e8c81096f8e93eb602df46809eac413c4e97d8f0951f263a3e" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.008931 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mbj6z"] Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.012109 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.012232 4947 generic.go:334] "Generic (PLEG): container finished" podID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerID="c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e" exitCode=0 Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.014234 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" event={"ID":"fa35d682-53d6-4191-9cf9-f48b9f74e858","Type":"ContainerDied","Data":"c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e"} Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.014774 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vx9fn" event={"ID":"fa35d682-53d6-4191-9cf9-f48b9f74e858","Type":"ContainerDied","Data":"2742ff9aa7f3cbee3d8389c7f258cc4ce04fcb1e9943ebf713523dc12c66fb09"} Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.021485 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "06282146-8047-4104-b189-c896e5b7f8b9" (UID: "06282146-8047-4104-b189-c896e5b7f8b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.024092 4947 scope.go:117] "RemoveContainer" containerID="b68921fda52bf0941dd3182cdd91346027905a40aad8abb0a9e34eef1346172c" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.026904 4947 generic.go:334] "Generic (PLEG): container finished" podID="ad96bcad-395b-4844-9992-00acdf7436c2" containerID="d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289" exitCode=0 Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.027039 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-47m2l" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.027054 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47m2l" event={"ID":"ad96bcad-395b-4844-9992-00acdf7436c2","Type":"ContainerDied","Data":"d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289"} Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.028107 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-47m2l" event={"ID":"ad96bcad-395b-4844-9992-00acdf7436c2","Type":"ContainerDied","Data":"6cbc84951af1c9fb04adcfedc17cf7a2205629dcc8722ddaa8c1026d70782225"} Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.043321 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vx9fn"] Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.048665 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vx9fn"] Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.053913 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod 
"ad96bcad-395b-4844-9992-00acdf7436c2" (UID: "ad96bcad-395b-4844-9992-00acdf7436c2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.060727 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qzj76"] Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.063960 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qzj76"] Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.067208 4947 scope.go:117] "RemoveContainer" containerID="8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.067554 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b\": container with ID starting with 8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b not found: ID does not exist" containerID="8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.067584 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b"} err="failed to get container status \"8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b\": rpc error: code = NotFound desc = could not find container \"8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b\": container with ID starting with 8115b93e0fd4495a2b68325eacc2bd399adaeffd03e7ffbff447c266b62f3c8b not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.067608 4947 scope.go:117] "RemoveContainer" containerID="9e14cc4195bf11e8c81096f8e93eb602df46809eac413c4e97d8f0951f263a3e" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 
00:15:25.067968 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e14cc4195bf11e8c81096f8e93eb602df46809eac413c4e97d8f0951f263a3e\": container with ID starting with 9e14cc4195bf11e8c81096f8e93eb602df46809eac413c4e97d8f0951f263a3e not found: ID does not exist" containerID="9e14cc4195bf11e8c81096f8e93eb602df46809eac413c4e97d8f0951f263a3e" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.067991 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e14cc4195bf11e8c81096f8e93eb602df46809eac413c4e97d8f0951f263a3e"} err="failed to get container status \"9e14cc4195bf11e8c81096f8e93eb602df46809eac413c4e97d8f0951f263a3e\": rpc error: code = NotFound desc = could not find container \"9e14cc4195bf11e8c81096f8e93eb602df46809eac413c4e97d8f0951f263a3e\": container with ID starting with 9e14cc4195bf11e8c81096f8e93eb602df46809eac413c4e97d8f0951f263a3e not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.068011 4947 scope.go:117] "RemoveContainer" containerID="b68921fda52bf0941dd3182cdd91346027905a40aad8abb0a9e34eef1346172c" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.068222 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b68921fda52bf0941dd3182cdd91346027905a40aad8abb0a9e34eef1346172c\": container with ID starting with b68921fda52bf0941dd3182cdd91346027905a40aad8abb0a9e34eef1346172c not found: ID does not exist" containerID="b68921fda52bf0941dd3182cdd91346027905a40aad8abb0a9e34eef1346172c" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.068334 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b68921fda52bf0941dd3182cdd91346027905a40aad8abb0a9e34eef1346172c"} err="failed to get container status \"b68921fda52bf0941dd3182cdd91346027905a40aad8abb0a9e34eef1346172c\": rpc 
error: code = NotFound desc = could not find container \"b68921fda52bf0941dd3182cdd91346027905a40aad8abb0a9e34eef1346172c\": container with ID starting with b68921fda52bf0941dd3182cdd91346027905a40aad8abb0a9e34eef1346172c not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.068354 4947 scope.go:117] "RemoveContainer" containerID="625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.088472 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-catalog-content\") pod \"49263faf-29f4-481c-aafd-a271a29c209a\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") " Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.088551 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-utilities\") pod \"49263faf-29f4-481c-aafd-a271a29c209a\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") " Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.088579 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ndnd\" (UniqueName: \"kubernetes.io/projected/49263faf-29f4-481c-aafd-a271a29c209a-kube-api-access-7ndnd\") pod \"49263faf-29f4-481c-aafd-a271a29c209a\" (UID: \"49263faf-29f4-481c-aafd-a271a29c209a\") " Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.089224 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-utilities" (OuterVolumeSpecName: "utilities") pod "49263faf-29f4-481c-aafd-a271a29c209a" (UID: "49263faf-29f4-481c-aafd-a271a29c209a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.090270 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.090648 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad96bcad-395b-4844-9992-00acdf7436c2-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.090745 4947 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.090763 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhdwv\" (UniqueName: \"kubernetes.io/projected/ad96bcad-395b-4844-9992-00acdf7436c2-kube-api-access-fhdwv\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.090774 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.090784 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8th5h\" (UniqueName: \"kubernetes.io/projected/06282146-8047-4104-b189-c896e5b7f8b9-kube-api-access-8th5h\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.090793 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxnl4\" (UniqueName: \"kubernetes.io/projected/fa35d682-53d6-4191-9cf9-f48b9f74e858-kube-api-access-sxnl4\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc 
kubenswrapper[4947]: I0125 00:15:25.090802 4947 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fa35d682-53d6-4191-9cf9-f48b9f74e858-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.090811 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.090819 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/06282146-8047-4104-b189-c896e5b7f8b9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.092708 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49263faf-29f4-481c-aafd-a271a29c209a-kube-api-access-7ndnd" (OuterVolumeSpecName: "kube-api-access-7ndnd") pod "49263faf-29f4-481c-aafd-a271a29c209a" (UID: "49263faf-29f4-481c-aafd-a271a29c209a"). InnerVolumeSpecName "kube-api-access-7ndnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.096508 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" path="/var/lib/kubelet/pods/900aeb01-050c-45b8-936c-e5f8d73ebeb5/volumes" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.096757 4947 scope.go:117] "RemoveContainer" containerID="43d814190eebb93dd6bba60983f167e4e7d8eb0cd7dcc3adc29725b6c2f90c2b" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.097171 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa35d682-53d6-4191-9cf9-f48b9f74e858" path="/var/lib/kubelet/pods/fa35d682-53d6-4191-9cf9-f48b9f74e858/volumes" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.126780 4947 scope.go:117] "RemoveContainer" containerID="3d383bbb6e2f77eb14a9b8a1dbd79f1c501ad3a1a7e6ec1145d397b4e8d1deb0" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.145402 4947 scope.go:117] "RemoveContainer" containerID="625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.145992 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1\": container with ID starting with 625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1 not found: ID does not exist" containerID="625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.146101 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1"} err="failed to get container status \"625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1\": rpc error: code = NotFound desc = could not find container 
\"625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1\": container with ID starting with 625f0cef66386e28287f85cd89001a6182e1a9e32fef13144f9306b8fd3bf8e1 not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.146221 4947 scope.go:117] "RemoveContainer" containerID="43d814190eebb93dd6bba60983f167e4e7d8eb0cd7dcc3adc29725b6c2f90c2b" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.146702 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43d814190eebb93dd6bba60983f167e4e7d8eb0cd7dcc3adc29725b6c2f90c2b\": container with ID starting with 43d814190eebb93dd6bba60983f167e4e7d8eb0cd7dcc3adc29725b6c2f90c2b not found: ID does not exist" containerID="43d814190eebb93dd6bba60983f167e4e7d8eb0cd7dcc3adc29725b6c2f90c2b" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.146751 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43d814190eebb93dd6bba60983f167e4e7d8eb0cd7dcc3adc29725b6c2f90c2b"} err="failed to get container status \"43d814190eebb93dd6bba60983f167e4e7d8eb0cd7dcc3adc29725b6c2f90c2b\": rpc error: code = NotFound desc = could not find container \"43d814190eebb93dd6bba60983f167e4e7d8eb0cd7dcc3adc29725b6c2f90c2b\": container with ID starting with 43d814190eebb93dd6bba60983f167e4e7d8eb0cd7dcc3adc29725b6c2f90c2b not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.146786 4947 scope.go:117] "RemoveContainer" containerID="3d383bbb6e2f77eb14a9b8a1dbd79f1c501ad3a1a7e6ec1145d397b4e8d1deb0" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.147155 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d383bbb6e2f77eb14a9b8a1dbd79f1c501ad3a1a7e6ec1145d397b4e8d1deb0\": container with ID starting with 3d383bbb6e2f77eb14a9b8a1dbd79f1c501ad3a1a7e6ec1145d397b4e8d1deb0 not found: ID does not exist" 
containerID="3d383bbb6e2f77eb14a9b8a1dbd79f1c501ad3a1a7e6ec1145d397b4e8d1deb0" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.147205 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d383bbb6e2f77eb14a9b8a1dbd79f1c501ad3a1a7e6ec1145d397b4e8d1deb0"} err="failed to get container status \"3d383bbb6e2f77eb14a9b8a1dbd79f1c501ad3a1a7e6ec1145d397b4e8d1deb0\": rpc error: code = NotFound desc = could not find container \"3d383bbb6e2f77eb14a9b8a1dbd79f1c501ad3a1a7e6ec1145d397b4e8d1deb0\": container with ID starting with 3d383bbb6e2f77eb14a9b8a1dbd79f1c501ad3a1a7e6ec1145d397b4e8d1deb0 not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.147242 4947 scope.go:117] "RemoveContainer" containerID="482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.162753 4947 scope.go:117] "RemoveContainer" containerID="4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.183355 4947 scope.go:117] "RemoveContainer" containerID="e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.192529 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ndnd\" (UniqueName: \"kubernetes.io/projected/49263faf-29f4-481c-aafd-a271a29c209a-kube-api-access-7ndnd\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.202108 4947 scope.go:117] "RemoveContainer" containerID="482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.202795 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3\": container with ID starting with 
482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3 not found: ID does not exist" containerID="482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.202923 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3"} err="failed to get container status \"482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3\": rpc error: code = NotFound desc = could not find container \"482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3\": container with ID starting with 482fba3b03a69741fd55f9c49368ba133c9fd2f319ef4e1223d2e1af6bddd6d3 not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.203035 4947 scope.go:117] "RemoveContainer" containerID="4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.203804 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16\": container with ID starting with 4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16 not found: ID does not exist" containerID="4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.203856 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16"} err="failed to get container status \"4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16\": rpc error: code = NotFound desc = could not find container \"4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16\": container with ID starting with 4542217595dd75eebca5222fcb5217ae0cff8df4479ef7c5469cd15bd336ba16 not found: ID does not 
exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.203886 4947 scope.go:117] "RemoveContainer" containerID="e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.204358 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116\": container with ID starting with e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116 not found: ID does not exist" containerID="e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.204391 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116"} err="failed to get container status \"e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116\": rpc error: code = NotFound desc = could not find container \"e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116\": container with ID starting with e3fbbc635210edef169ebdb44d20ee2090763de1a269a7c6409c3d6bf1dfb116 not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.204410 4947 scope.go:117] "RemoveContainer" containerID="c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.223179 4947 scope.go:117] "RemoveContainer" containerID="7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.223919 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49263faf-29f4-481c-aafd-a271a29c209a" (UID: "49263faf-29f4-481c-aafd-a271a29c209a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.239627 4947 scope.go:117] "RemoveContainer" containerID="c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.240385 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e\": container with ID starting with c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e not found: ID does not exist" containerID="c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.240436 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e"} err="failed to get container status \"c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e\": rpc error: code = NotFound desc = could not find container \"c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e\": container with ID starting with c76cf10a240dd3e6abf0e6c61fc65ab4c06d545e02cbd13850e4b3a1505a6b8e not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.240472 4947 scope.go:117] "RemoveContainer" containerID="7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.241030 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369\": container with ID starting with 7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369 not found: ID does not exist" containerID="7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.241057 
4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369"} err="failed to get container status \"7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369\": rpc error: code = NotFound desc = could not find container \"7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369\": container with ID starting with 7a4ab516c4e08d71314a18aa4d477dfc08e5a573c06117757b88ea461a13c369 not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.241075 4947 scope.go:117] "RemoveContainer" containerID="d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.261512 4947 scope.go:117] "RemoveContainer" containerID="f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.277698 4947 scope.go:117] "RemoveContainer" containerID="95a8066ea5df0bbdd562663b51cd869b66acf5c56464c5567e5718c195b7c643" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.294469 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49263faf-29f4-481c-aafd-a271a29c209a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.314930 4947 scope.go:117] "RemoveContainer" containerID="d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.316820 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289\": container with ID starting with d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289 not found: ID does not exist" containerID="d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289" Jan 25 00:15:25 
crc kubenswrapper[4947]: I0125 00:15:25.316872 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289"} err="failed to get container status \"d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289\": rpc error: code = NotFound desc = could not find container \"d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289\": container with ID starting with d9ef5b68bf9e0e13efe5f814ee090df8faa653dec0c7b54fa9b8283ea6f30289 not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.316906 4947 scope.go:117] "RemoveContainer" containerID="f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7" Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.318602 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7\": container with ID starting with f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7 not found: ID does not exist" containerID="f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.318632 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7"} err="failed to get container status \"f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7\": rpc error: code = NotFound desc = could not find container \"f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7\": container with ID starting with f14331188acade273ec58d376546a2909cdf07b547bdce264ef978c7e825b2d7 not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.318647 4947 scope.go:117] "RemoveContainer" containerID="95a8066ea5df0bbdd562663b51cd869b66acf5c56464c5567e5718c195b7c643" Jan 25 
00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.318912 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95a8066ea5df0bbdd562663b51cd869b66acf5c56464c5567e5718c195b7c643\": container with ID starting with 95a8066ea5df0bbdd562663b51cd869b66acf5c56464c5567e5718c195b7c643 not found: ID does not exist" containerID="95a8066ea5df0bbdd562663b51cd869b66acf5c56464c5567e5718c195b7c643" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.318929 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95a8066ea5df0bbdd562663b51cd869b66acf5c56464c5567e5718c195b7c643"} err="failed to get container status \"95a8066ea5df0bbdd562663b51cd869b66acf5c56464c5567e5718c195b7c643\": rpc error: code = NotFound desc = could not find container \"95a8066ea5df0bbdd562663b51cd869b66acf5c56464c5567e5718c195b7c643\": container with ID starting with 95a8066ea5df0bbdd562663b51cd869b66acf5c56464c5567e5718c195b7c643 not found: ID does not exist" Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.321287 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwwnp"] Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.326941 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwwnp"] Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.351309 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ltw77"] Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.358560 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ltw77"] Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.365268 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-47m2l"] Jan 25 00:15:25 crc kubenswrapper[4947]: I0125 00:15:25.369405 4947 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-47m2l"] Jan 25 00:15:25 crc kubenswrapper[4947]: E0125 00:15:25.460414 4947 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49263faf_29f4_481c_aafd_a271a29c209a.slice/crio-23655937ab043534ca01347d9a2964b60c41f2a6eae0705e6c094b13084701de\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49263faf_29f4_481c_aafd_a271a29c209a.slice\": RecentStats: unable to find data in memory cache]" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.035430 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" event={"ID":"94a09856-1120-4003-a601-ee3c9121eb51","Type":"ContainerStarted","Data":"51d0122ca4b1dc3ac3173be4b1a89e3a7fcd2485ee9d57122adccc93f77adbb4"} Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.035772 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.035784 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" event={"ID":"94a09856-1120-4003-a601-ee3c9121eb51","Type":"ContainerStarted","Data":"1511ab3dd5a6cf0320e3eea749afafe601b00c1cb5791934931bffb2682194fa"} Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.039214 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.055934 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mbj6z" podStartSLOduration=2.055916232 podStartE2EDuration="2.055916232s" 
podCreationTimestamp="2026-01-25 00:15:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:15:26.055894301 +0000 UTC m=+365.288884761" watchObservedRunningTime="2026-01-25 00:15:26.055916232 +0000 UTC m=+365.288906672" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.640911 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lkvvh"] Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641167 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad96bcad-395b-4844-9992-00acdf7436c2" containerName="extract-utilities" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641184 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad96bcad-395b-4844-9992-00acdf7436c2" containerName="extract-utilities" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641198 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerName="marketplace-operator" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641205 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerName="marketplace-operator" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641221 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49263faf-29f4-481c-aafd-a271a29c209a" containerName="extract-content" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641229 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="49263faf-29f4-481c-aafd-a271a29c209a" containerName="extract-content" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641241 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06282146-8047-4104-b189-c896e5b7f8b9" containerName="extract-content" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641249 4947 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="06282146-8047-4104-b189-c896e5b7f8b9" containerName="extract-content" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641261 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49263faf-29f4-481c-aafd-a271a29c209a" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641269 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="49263faf-29f4-481c-aafd-a271a29c209a" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641281 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641288 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641300 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" containerName="extract-utilities" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641307 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" containerName="extract-utilities" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641317 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49263faf-29f4-481c-aafd-a271a29c209a" containerName="extract-utilities" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641325 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="49263faf-29f4-481c-aafd-a271a29c209a" containerName="extract-utilities" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641334 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06282146-8047-4104-b189-c896e5b7f8b9" containerName="extract-utilities" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641342 4947 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="06282146-8047-4104-b189-c896e5b7f8b9" containerName="extract-utilities" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641351 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad96bcad-395b-4844-9992-00acdf7436c2" containerName="extract-content" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641358 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad96bcad-395b-4844-9992-00acdf7436c2" containerName="extract-content" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641367 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06282146-8047-4104-b189-c896e5b7f8b9" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641374 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="06282146-8047-4104-b189-c896e5b7f8b9" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641384 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" containerName="extract-content" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641391 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" containerName="extract-content" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641401 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad96bcad-395b-4844-9992-00acdf7436c2" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641408 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad96bcad-395b-4844-9992-00acdf7436c2" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641528 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="49263faf-29f4-481c-aafd-a271a29c209a" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641541 4947 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="ad96bcad-395b-4844-9992-00acdf7436c2" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641556 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerName="marketplace-operator" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641565 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="900aeb01-050c-45b8-936c-e5f8d73ebeb5" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641576 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="06282146-8047-4104-b189-c896e5b7f8b9" containerName="registry-server" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641585 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerName="marketplace-operator" Jan 25 00:15:26 crc kubenswrapper[4947]: E0125 00:15:26.641696 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerName="marketplace-operator" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.641704 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa35d682-53d6-4191-9cf9-f48b9f74e858" containerName="marketplace-operator" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.642406 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.645870 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.647789 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lkvvh"] Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.813350 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd72v\" (UniqueName: \"kubernetes.io/projected/8f150ea3-0af6-4206-9d74-e15f901e571b-kube-api-access-kd72v\") pod \"certified-operators-lkvvh\" (UID: \"8f150ea3-0af6-4206-9d74-e15f901e571b\") " pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.813448 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f150ea3-0af6-4206-9d74-e15f901e571b-catalog-content\") pod \"certified-operators-lkvvh\" (UID: \"8f150ea3-0af6-4206-9d74-e15f901e571b\") " pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.813514 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f150ea3-0af6-4206-9d74-e15f901e571b-utilities\") pod \"certified-operators-lkvvh\" (UID: \"8f150ea3-0af6-4206-9d74-e15f901e571b\") " pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.914469 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd72v\" (UniqueName: \"kubernetes.io/projected/8f150ea3-0af6-4206-9d74-e15f901e571b-kube-api-access-kd72v\") pod \"certified-operators-lkvvh\" 
(UID: \"8f150ea3-0af6-4206-9d74-e15f901e571b\") " pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.914619 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f150ea3-0af6-4206-9d74-e15f901e571b-catalog-content\") pod \"certified-operators-lkvvh\" (UID: \"8f150ea3-0af6-4206-9d74-e15f901e571b\") " pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.915238 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f150ea3-0af6-4206-9d74-e15f901e571b-catalog-content\") pod \"certified-operators-lkvvh\" (UID: \"8f150ea3-0af6-4206-9d74-e15f901e571b\") " pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.915395 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f150ea3-0af6-4206-9d74-e15f901e571b-utilities\") pod \"certified-operators-lkvvh\" (UID: \"8f150ea3-0af6-4206-9d74-e15f901e571b\") " pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.915774 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f150ea3-0af6-4206-9d74-e15f901e571b-utilities\") pod \"certified-operators-lkvvh\" (UID: \"8f150ea3-0af6-4206-9d74-e15f901e571b\") " pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.934819 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd72v\" (UniqueName: \"kubernetes.io/projected/8f150ea3-0af6-4206-9d74-e15f901e571b-kube-api-access-kd72v\") pod \"certified-operators-lkvvh\" (UID: \"8f150ea3-0af6-4206-9d74-e15f901e571b\") " 
pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:26 crc kubenswrapper[4947]: I0125 00:15:26.976915 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lkvvh" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.096550 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06282146-8047-4104-b189-c896e5b7f8b9" path="/var/lib/kubelet/pods/06282146-8047-4104-b189-c896e5b7f8b9/volumes" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.097296 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49263faf-29f4-481c-aafd-a271a29c209a" path="/var/lib/kubelet/pods/49263faf-29f4-481c-aafd-a271a29c209a/volumes" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.097979 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad96bcad-395b-4844-9992-00acdf7436c2" path="/var/lib/kubelet/pods/ad96bcad-395b-4844-9992-00acdf7436c2/volumes" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.172985 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lkvvh"] Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.631035 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hwxx4"] Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.634271 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.637555 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.643170 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwxx4"] Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.724704 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-utilities\") pod \"redhat-marketplace-hwxx4\" (UID: \"24ae0891-d29f-45fe-be30-a46f76a39dda\") " pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.724760 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-catalog-content\") pod \"redhat-marketplace-hwxx4\" (UID: \"24ae0891-d29f-45fe-be30-a46f76a39dda\") " pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.724788 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdqff\" (UniqueName: \"kubernetes.io/projected/24ae0891-d29f-45fe-be30-a46f76a39dda-kube-api-access-hdqff\") pod \"redhat-marketplace-hwxx4\" (UID: \"24ae0891-d29f-45fe-be30-a46f76a39dda\") " pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.825535 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-utilities\") pod \"redhat-marketplace-hwxx4\" (UID: 
\"24ae0891-d29f-45fe-be30-a46f76a39dda\") " pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.825594 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-catalog-content\") pod \"redhat-marketplace-hwxx4\" (UID: \"24ae0891-d29f-45fe-be30-a46f76a39dda\") " pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.825617 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdqff\" (UniqueName: \"kubernetes.io/projected/24ae0891-d29f-45fe-be30-a46f76a39dda-kube-api-access-hdqff\") pod \"redhat-marketplace-hwxx4\" (UID: \"24ae0891-d29f-45fe-be30-a46f76a39dda\") " pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.826207 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-utilities\") pod \"redhat-marketplace-hwxx4\" (UID: \"24ae0891-d29f-45fe-be30-a46f76a39dda\") " pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.826243 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-catalog-content\") pod \"redhat-marketplace-hwxx4\" (UID: \"24ae0891-d29f-45fe-be30-a46f76a39dda\") " pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.842913 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdqff\" (UniqueName: \"kubernetes.io/projected/24ae0891-d29f-45fe-be30-a46f76a39dda-kube-api-access-hdqff\") pod \"redhat-marketplace-hwxx4\" (UID: 
\"24ae0891-d29f-45fe-be30-a46f76a39dda\") " pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:27 crc kubenswrapper[4947]: I0125 00:15:27.954646 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwxx4" Jan 25 00:15:28 crc kubenswrapper[4947]: I0125 00:15:28.059358 4947 generic.go:334] "Generic (PLEG): container finished" podID="8f150ea3-0af6-4206-9d74-e15f901e571b" containerID="c325dde1e0617483647b9a62738677df1b58ca118c1311ba35f40be31385b72f" exitCode=0 Jan 25 00:15:28 crc kubenswrapper[4947]: I0125 00:15:28.059420 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkvvh" event={"ID":"8f150ea3-0af6-4206-9d74-e15f901e571b","Type":"ContainerDied","Data":"c325dde1e0617483647b9a62738677df1b58ca118c1311ba35f40be31385b72f"} Jan 25 00:15:28 crc kubenswrapper[4947]: I0125 00:15:28.059763 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkvvh" event={"ID":"8f150ea3-0af6-4206-9d74-e15f901e571b","Type":"ContainerStarted","Data":"343b3cc67b71960c0db3cda1ab8b65d5f69c686aa12b39af56ede8682e25082d"} Jan 25 00:15:28 crc kubenswrapper[4947]: I0125 00:15:28.392538 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwxx4"] Jan 25 00:15:28 crc kubenswrapper[4947]: W0125 00:15:28.397676 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24ae0891_d29f_45fe_be30_a46f76a39dda.slice/crio-aea55c0ecb2555c344ea84eb69f61a3ca31d0786e037f7a2f29477da1e97cbae WatchSource:0}: Error finding container aea55c0ecb2555c344ea84eb69f61a3ca31d0786e037f7a2f29477da1e97cbae: Status 404 returned error can't find the container with id aea55c0ecb2555c344ea84eb69f61a3ca31d0786e037f7a2f29477da1e97cbae Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.036086 4947 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/redhat-operators-m2ddl"] Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.039369 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.041056 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.054585 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m2ddl"] Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.065898 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkvvh" event={"ID":"8f150ea3-0af6-4206-9d74-e15f901e571b","Type":"ContainerStarted","Data":"a7c6bc9f465b60198c1a17fb7898cdb140f380c0e39c47f7d0c129d8e7b0123b"} Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.067646 4947 generic.go:334] "Generic (PLEG): container finished" podID="24ae0891-d29f-45fe-be30-a46f76a39dda" containerID="3afef73f532178d1f4ce7234f366893070291c212faf97513ba9f77386409c97" exitCode=0 Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.067683 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwxx4" event={"ID":"24ae0891-d29f-45fe-be30-a46f76a39dda","Type":"ContainerDied","Data":"3afef73f532178d1f4ce7234f366893070291c212faf97513ba9f77386409c97"} Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.067716 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwxx4" event={"ID":"24ae0891-d29f-45fe-be30-a46f76a39dda","Type":"ContainerStarted","Data":"aea55c0ecb2555c344ea84eb69f61a3ca31d0786e037f7a2f29477da1e97cbae"} Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.149832 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8adfaf1-4e17-430c-970e-1cbf2e58c18a-utilities\") pod \"redhat-operators-m2ddl\" (UID: \"e8adfaf1-4e17-430c-970e-1cbf2e58c18a\") " pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.149897 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8adfaf1-4e17-430c-970e-1cbf2e58c18a-catalog-content\") pod \"redhat-operators-m2ddl\" (UID: \"e8adfaf1-4e17-430c-970e-1cbf2e58c18a\") " pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.149927 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q76j\" (UniqueName: \"kubernetes.io/projected/e8adfaf1-4e17-430c-970e-1cbf2e58c18a-kube-api-access-9q76j\") pod \"redhat-operators-m2ddl\" (UID: \"e8adfaf1-4e17-430c-970e-1cbf2e58c18a\") " pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.250932 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8adfaf1-4e17-430c-970e-1cbf2e58c18a-utilities\") pod \"redhat-operators-m2ddl\" (UID: \"e8adfaf1-4e17-430c-970e-1cbf2e58c18a\") " pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.251018 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8adfaf1-4e17-430c-970e-1cbf2e58c18a-catalog-content\") pod \"redhat-operators-m2ddl\" (UID: \"e8adfaf1-4e17-430c-970e-1cbf2e58c18a\") " pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.251043 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9q76j\" (UniqueName: \"kubernetes.io/projected/e8adfaf1-4e17-430c-970e-1cbf2e58c18a-kube-api-access-9q76j\") pod \"redhat-operators-m2ddl\" (UID: \"e8adfaf1-4e17-430c-970e-1cbf2e58c18a\") " pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.252242 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e8adfaf1-4e17-430c-970e-1cbf2e58c18a-utilities\") pod \"redhat-operators-m2ddl\" (UID: \"e8adfaf1-4e17-430c-970e-1cbf2e58c18a\") " pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.252379 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e8adfaf1-4e17-430c-970e-1cbf2e58c18a-catalog-content\") pod \"redhat-operators-m2ddl\" (UID: \"e8adfaf1-4e17-430c-970e-1cbf2e58c18a\") " pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.269925 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q76j\" (UniqueName: \"kubernetes.io/projected/e8adfaf1-4e17-430c-970e-1cbf2e58c18a-kube-api-access-9q76j\") pod \"redhat-operators-m2ddl\" (UID: \"e8adfaf1-4e17-430c-970e-1cbf2e58c18a\") " pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.363698 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m2ddl" Jan 25 00:15:29 crc kubenswrapper[4947]: I0125 00:15:29.772846 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m2ddl"] Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.028102 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g7982"] Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.029778 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.035566 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g7982"] Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.038797 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.077064 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwxx4" event={"ID":"24ae0891-d29f-45fe-be30-a46f76a39dda","Type":"ContainerStarted","Data":"24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870"} Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.080997 4947 generic.go:334] "Generic (PLEG): container finished" podID="e8adfaf1-4e17-430c-970e-1cbf2e58c18a" containerID="8b87350da3c79cd080e933163c12159c1bea5e8c863270f954810d9873b9c4f6" exitCode=0 Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.081053 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2ddl" event={"ID":"e8adfaf1-4e17-430c-970e-1cbf2e58c18a","Type":"ContainerDied","Data":"8b87350da3c79cd080e933163c12159c1bea5e8c863270f954810d9873b9c4f6"} Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.081072 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-m2ddl" event={"ID":"e8adfaf1-4e17-430c-970e-1cbf2e58c18a","Type":"ContainerStarted","Data":"918fd92f2892d9b2c9e7e47c9bb4ea407961cfb8ea459b27470f445d87973b27"} Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.086741 4947 generic.go:334] "Generic (PLEG): container finished" podID="8f150ea3-0af6-4206-9d74-e15f901e571b" containerID="a7c6bc9f465b60198c1a17fb7898cdb140f380c0e39c47f7d0c129d8e7b0123b" exitCode=0 Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.087042 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkvvh" event={"ID":"8f150ea3-0af6-4206-9d74-e15f901e571b","Type":"ContainerDied","Data":"a7c6bc9f465b60198c1a17fb7898cdb140f380c0e39c47f7d0c129d8e7b0123b"} Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.162794 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e39c693-6291-4810-863e-fd3e5cd45fbc-catalog-content\") pod \"community-operators-g7982\" (UID: \"5e39c693-6291-4810-863e-fd3e5cd45fbc\") " pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.162868 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2btf\" (UniqueName: \"kubernetes.io/projected/5e39c693-6291-4810-863e-fd3e5cd45fbc-kube-api-access-h2btf\") pod \"community-operators-g7982\" (UID: \"5e39c693-6291-4810-863e-fd3e5cd45fbc\") " pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.162890 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e39c693-6291-4810-863e-fd3e5cd45fbc-utilities\") pod \"community-operators-g7982\" (UID: \"5e39c693-6291-4810-863e-fd3e5cd45fbc\") " 
pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.264548 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e39c693-6291-4810-863e-fd3e5cd45fbc-catalog-content\") pod \"community-operators-g7982\" (UID: \"5e39c693-6291-4810-863e-fd3e5cd45fbc\") " pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.264635 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2btf\" (UniqueName: \"kubernetes.io/projected/5e39c693-6291-4810-863e-fd3e5cd45fbc-kube-api-access-h2btf\") pod \"community-operators-g7982\" (UID: \"5e39c693-6291-4810-863e-fd3e5cd45fbc\") " pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.264668 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e39c693-6291-4810-863e-fd3e5cd45fbc-utilities\") pod \"community-operators-g7982\" (UID: \"5e39c693-6291-4810-863e-fd3e5cd45fbc\") " pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.265230 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e39c693-6291-4810-863e-fd3e5cd45fbc-utilities\") pod \"community-operators-g7982\" (UID: \"5e39c693-6291-4810-863e-fd3e5cd45fbc\") " pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.265507 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e39c693-6291-4810-863e-fd3e5cd45fbc-catalog-content\") pod \"community-operators-g7982\" (UID: \"5e39c693-6291-4810-863e-fd3e5cd45fbc\") " 
pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.296313 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2btf\" (UniqueName: \"kubernetes.io/projected/5e39c693-6291-4810-863e-fd3e5cd45fbc-kube-api-access-h2btf\") pod \"community-operators-g7982\" (UID: \"5e39c693-6291-4810-863e-fd3e5cd45fbc\") " pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.353190 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g7982" Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.757403 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g7982"] Jan 25 00:15:30 crc kubenswrapper[4947]: I0125 00:15:30.978683 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-bh4mm" Jan 25 00:15:31 crc kubenswrapper[4947]: I0125 00:15:31.033611 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mprs4"] Jan 25 00:15:31 crc kubenswrapper[4947]: I0125 00:15:31.108526 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lkvvh" event={"ID":"8f150ea3-0af6-4206-9d74-e15f901e571b","Type":"ContainerStarted","Data":"ce66a0985d325b64c31e4b39c0896b88ee430ddaccc6628719c4b0baa20664ca"} Jan 25 00:15:31 crc kubenswrapper[4947]: I0125 00:15:31.111704 4947 generic.go:334] "Generic (PLEG): container finished" podID="24ae0891-d29f-45fe-be30-a46f76a39dda" containerID="24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870" exitCode=0 Jan 25 00:15:31 crc kubenswrapper[4947]: I0125 00:15:31.111783 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwxx4" 
event={"ID":"24ae0891-d29f-45fe-be30-a46f76a39dda","Type":"ContainerDied","Data":"24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870"} Jan 25 00:15:31 crc kubenswrapper[4947]: I0125 00:15:31.111804 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwxx4" event={"ID":"24ae0891-d29f-45fe-be30-a46f76a39dda","Type":"ContainerStarted","Data":"e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e"} Jan 25 00:15:31 crc kubenswrapper[4947]: I0125 00:15:31.115377 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2ddl" event={"ID":"e8adfaf1-4e17-430c-970e-1cbf2e58c18a","Type":"ContainerStarted","Data":"45b6caad89697237a873ac3dd397ab2e0d67a08d04b447395ae4e0b4fcea4215"} Jan 25 00:15:31 crc kubenswrapper[4947]: I0125 00:15:31.117260 4947 generic.go:334] "Generic (PLEG): container finished" podID="5e39c693-6291-4810-863e-fd3e5cd45fbc" containerID="0d6bf80afeab0eb1908534cef87952e842b53fdac5af63be09110db5a79ae1e6" exitCode=0 Jan 25 00:15:31 crc kubenswrapper[4947]: I0125 00:15:31.117316 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7982" event={"ID":"5e39c693-6291-4810-863e-fd3e5cd45fbc","Type":"ContainerDied","Data":"0d6bf80afeab0eb1908534cef87952e842b53fdac5af63be09110db5a79ae1e6"} Jan 25 00:15:31 crc kubenswrapper[4947]: I0125 00:15:31.117345 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7982" event={"ID":"5e39c693-6291-4810-863e-fd3e5cd45fbc","Type":"ContainerStarted","Data":"3e42c04272d04088056859078860477dd028a9ec745baadff96bbd49b27609a8"} Jan 25 00:15:31 crc kubenswrapper[4947]: I0125 00:15:31.170769 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hwxx4" podStartSLOduration=2.544359241 podStartE2EDuration="4.170740929s" podCreationTimestamp="2026-01-25 00:15:27 +0000 
UTC" firstStartedPulling="2026-01-25 00:15:29.068908881 +0000 UTC m=+368.301899321" lastFinishedPulling="2026-01-25 00:15:30.695290579 +0000 UTC m=+369.928281009" observedRunningTime="2026-01-25 00:15:31.168641854 +0000 UTC m=+370.401632294" watchObservedRunningTime="2026-01-25 00:15:31.170740929 +0000 UTC m=+370.403731369" Jan 25 00:15:31 crc kubenswrapper[4947]: I0125 00:15:31.183623 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lkvvh" podStartSLOduration=2.711128419 podStartE2EDuration="5.18360395s" podCreationTimestamp="2026-01-25 00:15:26 +0000 UTC" firstStartedPulling="2026-01-25 00:15:28.06089756 +0000 UTC m=+367.293888000" lastFinishedPulling="2026-01-25 00:15:30.533373091 +0000 UTC m=+369.766363531" observedRunningTime="2026-01-25 00:15:31.183107297 +0000 UTC m=+370.416097737" watchObservedRunningTime="2026-01-25 00:15:31.18360395 +0000 UTC m=+370.416594390" Jan 25 00:15:32 crc kubenswrapper[4947]: I0125 00:15:32.127591 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7982" event={"ID":"5e39c693-6291-4810-863e-fd3e5cd45fbc","Type":"ContainerStarted","Data":"497fc5e2192eac91e48279f7de5d750ddcc29f1d25f92e38e4fb7244e602915a"} Jan 25 00:15:32 crc kubenswrapper[4947]: I0125 00:15:32.129937 4947 generic.go:334] "Generic (PLEG): container finished" podID="e8adfaf1-4e17-430c-970e-1cbf2e58c18a" containerID="45b6caad89697237a873ac3dd397ab2e0d67a08d04b447395ae4e0b4fcea4215" exitCode=0 Jan 25 00:15:32 crc kubenswrapper[4947]: I0125 00:15:32.130885 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2ddl" event={"ID":"e8adfaf1-4e17-430c-970e-1cbf2e58c18a","Type":"ContainerDied","Data":"45b6caad89697237a873ac3dd397ab2e0d67a08d04b447395ae4e0b4fcea4215"} Jan 25 00:15:33 crc kubenswrapper[4947]: I0125 00:15:33.137997 4947 generic.go:334] "Generic (PLEG): container finished" 
podID="5e39c693-6291-4810-863e-fd3e5cd45fbc" containerID="497fc5e2192eac91e48279f7de5d750ddcc29f1d25f92e38e4fb7244e602915a" exitCode=0
Jan 25 00:15:33 crc kubenswrapper[4947]: I0125 00:15:33.138079 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7982" event={"ID":"5e39c693-6291-4810-863e-fd3e5cd45fbc","Type":"ContainerDied","Data":"497fc5e2192eac91e48279f7de5d750ddcc29f1d25f92e38e4fb7244e602915a"}
Jan 25 00:15:33 crc kubenswrapper[4947]: I0125 00:15:33.141792 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2ddl" event={"ID":"e8adfaf1-4e17-430c-970e-1cbf2e58c18a","Type":"ContainerStarted","Data":"e8b3d19625d955735254aa8987188f61e2f83fdd6e6b4484871567db3765741a"}
Jan 25 00:15:33 crc kubenswrapper[4947]: I0125 00:15:33.179487 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m2ddl" podStartSLOduration=1.694324854 podStartE2EDuration="4.179463282s" podCreationTimestamp="2026-01-25 00:15:29 +0000 UTC" firstStartedPulling="2026-01-25 00:15:30.084411706 +0000 UTC m=+369.317402166" lastFinishedPulling="2026-01-25 00:15:32.569550154 +0000 UTC m=+371.802540594" observedRunningTime="2026-01-25 00:15:33.176340832 +0000 UTC m=+372.409331272" watchObservedRunningTime="2026-01-25 00:15:33.179463282 +0000 UTC m=+372.412453732"
Jan 25 00:15:35 crc kubenswrapper[4947]: I0125 00:15:35.160035 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7982" event={"ID":"5e39c693-6291-4810-863e-fd3e5cd45fbc","Type":"ContainerStarted","Data":"c252e5c047622c2b6848df213b513c8bfc163d407948dc851411ef8c8d007320"}
Jan 25 00:15:35 crc kubenswrapper[4947]: I0125 00:15:35.189606 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g7982" podStartSLOduration=2.6829224 podStartE2EDuration="5.189578163s" podCreationTimestamp="2026-01-25 00:15:30 +0000 UTC" firstStartedPulling="2026-01-25 00:15:31.126322542 +0000 UTC m=+370.359312982" lastFinishedPulling="2026-01-25 00:15:33.632978305 +0000 UTC m=+372.865968745" observedRunningTime="2026-01-25 00:15:35.180394965 +0000 UTC m=+374.413385425" watchObservedRunningTime="2026-01-25 00:15:35.189578163 +0000 UTC m=+374.422568603"
Jan 25 00:15:36 crc kubenswrapper[4947]: I0125 00:15:36.978182 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lkvvh"
Jan 25 00:15:36 crc kubenswrapper[4947]: I0125 00:15:36.978542 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lkvvh"
Jan 25 00:15:37 crc kubenswrapper[4947]: I0125 00:15:37.038740 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lkvvh"
Jan 25 00:15:37 crc kubenswrapper[4947]: I0125 00:15:37.235307 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lkvvh"
Jan 25 00:15:37 crc kubenswrapper[4947]: I0125 00:15:37.955213 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hwxx4"
Jan 25 00:15:37 crc kubenswrapper[4947]: I0125 00:15:37.955302 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hwxx4"
Jan 25 00:15:38 crc kubenswrapper[4947]: I0125 00:15:38.008571 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hwxx4"
Jan 25 00:15:38 crc kubenswrapper[4947]: I0125 00:15:38.242934 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hwxx4"
Jan 25 00:15:39 crc kubenswrapper[4947]: I0125 00:15:39.364267 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m2ddl"
Jan 25 00:15:39 crc kubenswrapper[4947]: I0125 00:15:39.364353 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m2ddl"
Jan 25 00:15:39 crc kubenswrapper[4947]: I0125 00:15:39.398147 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m2ddl"
Jan 25 00:15:40 crc kubenswrapper[4947]: I0125 00:15:40.241906 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m2ddl"
Jan 25 00:15:40 crc kubenswrapper[4947]: I0125 00:15:40.354221 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g7982"
Jan 25 00:15:40 crc kubenswrapper[4947]: I0125 00:15:40.354263 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g7982"
Jan 25 00:15:40 crc kubenswrapper[4947]: I0125 00:15:40.392034 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g7982"
Jan 25 00:15:41 crc kubenswrapper[4947]: I0125 00:15:41.245057 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g7982"
Jan 25 00:15:45 crc kubenswrapper[4947]: I0125 00:15:45.585327 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc"]
Jan 25 00:15:45 crc kubenswrapper[4947]: I0125 00:15:45.585904 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" podUID="02b69aab-7cbd-4f58-8756-c1c5b615c33d" containerName="controller-manager" containerID="cri-o://d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea" gracePeriod=30
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.072434 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.072827 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.072886 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh"
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.073754 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a30b7c64683957a95e052711b4800e7471651a4fd592c025524ce839a16c59c"} pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.073847 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" containerID="cri-o://6a30b7c64683957a95e052711b4800e7471651a4fd592c025524ce839a16c59c" gracePeriod=600
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.813849 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc"
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.847480 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6cb4bd4595-z98df"]
Jan 25 00:15:47 crc kubenswrapper[4947]: E0125 00:15:47.847701 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02b69aab-7cbd-4f58-8756-c1c5b615c33d" containerName="controller-manager"
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.847712 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="02b69aab-7cbd-4f58-8756-c1c5b615c33d" containerName="controller-manager"
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.847809 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="02b69aab-7cbd-4f58-8756-c1c5b615c33d" containerName="controller-manager"
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.848176 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df"
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.859329 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cb4bd4595-z98df"]
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.900662 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-proxy-ca-bundles\") pod \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") "
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.900737 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-config\") pod \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") "
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.900761 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-client-ca\") pod \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") "
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.900811 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02b69aab-7cbd-4f58-8756-c1c5b615c33d-serving-cert\") pod \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") "
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.900843 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mscq\" (UniqueName: \"kubernetes.io/projected/02b69aab-7cbd-4f58-8756-c1c5b615c33d-kube-api-access-8mscq\") pod \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\" (UID: \"02b69aab-7cbd-4f58-8756-c1c5b615c33d\") "
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.901095 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e897be71-7c54-40b1-a607-b102af1b8a61-serving-cert\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df"
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.901144 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e897be71-7c54-40b1-a607-b102af1b8a61-client-ca\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df"
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.901191 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e897be71-7c54-40b1-a607-b102af1b8a61-config\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df"
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.901240 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e897be71-7c54-40b1-a607-b102af1b8a61-proxy-ca-bundles\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df"
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.901275 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swdzm\" (UniqueName: \"kubernetes.io/projected/e897be71-7c54-40b1-a607-b102af1b8a61-kube-api-access-swdzm\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df"
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.902412 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-client-ca" (OuterVolumeSpecName: "client-ca") pod "02b69aab-7cbd-4f58-8756-c1c5b615c33d" (UID: "02b69aab-7cbd-4f58-8756-c1c5b615c33d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.902568 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-config" (OuterVolumeSpecName: "config") pod "02b69aab-7cbd-4f58-8756-c1c5b615c33d" (UID: "02b69aab-7cbd-4f58-8756-c1c5b615c33d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.902621 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "02b69aab-7cbd-4f58-8756-c1c5b615c33d" (UID: "02b69aab-7cbd-4f58-8756-c1c5b615c33d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.907660 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02b69aab-7cbd-4f58-8756-c1c5b615c33d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "02b69aab-7cbd-4f58-8756-c1c5b615c33d" (UID: "02b69aab-7cbd-4f58-8756-c1c5b615c33d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:15:47 crc kubenswrapper[4947]: I0125 00:15:47.911081 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02b69aab-7cbd-4f58-8756-c1c5b615c33d-kube-api-access-8mscq" (OuterVolumeSpecName: "kube-api-access-8mscq") pod "02b69aab-7cbd-4f58-8756-c1c5b615c33d" (UID: "02b69aab-7cbd-4f58-8756-c1c5b615c33d"). InnerVolumeSpecName "kube-api-access-8mscq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.002373 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e897be71-7c54-40b1-a607-b102af1b8a61-serving-cert\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df"
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.002679 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e897be71-7c54-40b1-a607-b102af1b8a61-client-ca\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df"
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.002713 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e897be71-7c54-40b1-a607-b102af1b8a61-config\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df"
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.002749 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e897be71-7c54-40b1-a607-b102af1b8a61-proxy-ca-bundles\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df"
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.002786 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swdzm\" (UniqueName: \"kubernetes.io/projected/e897be71-7c54-40b1-a607-b102af1b8a61-kube-api-access-swdzm\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df"
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.002833 4947 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-config\") on node \"crc\" DevicePath \"\""
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.002847 4947 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-client-ca\") on node \"crc\" DevicePath \"\""
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.002858 4947 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/02b69aab-7cbd-4f58-8756-c1c5b615c33d-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.002871 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mscq\" (UniqueName: \"kubernetes.io/projected/02b69aab-7cbd-4f58-8756-c1c5b615c33d-kube-api-access-8mscq\") on node \"crc\" DevicePath \"\""
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.002884 4947 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/02b69aab-7cbd-4f58-8756-c1c5b615c33d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.004044 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/e897be71-7c54-40b1-a607-b102af1b8a61-proxy-ca-bundles\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df"
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.004100 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e897be71-7c54-40b1-a607-b102af1b8a61-client-ca\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df"
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.004254 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e897be71-7c54-40b1-a607-b102af1b8a61-config\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df"
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.006014 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e897be71-7c54-40b1-a607-b102af1b8a61-serving-cert\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df"
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.021184 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swdzm\" (UniqueName: \"kubernetes.io/projected/e897be71-7c54-40b1-a607-b102af1b8a61-kube-api-access-swdzm\") pod \"controller-manager-6cb4bd4595-z98df\" (UID: \"e897be71-7c54-40b1-a607-b102af1b8a61\") " pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df"
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.168269 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df"
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.244449 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerID="6a30b7c64683957a95e052711b4800e7471651a4fd592c025524ce839a16c59c" exitCode=0
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.244519 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerDied","Data":"6a30b7c64683957a95e052711b4800e7471651a4fd592c025524ce839a16c59c"}
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.244608 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"3aad7ba55a48c39182daae557e3aa5f13d9797554883516debfe73624ee84ab3"}
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.244637 4947 scope.go:117] "RemoveContainer" containerID="d4b1bcb51e9a15e6b99838c616d6e4c6ca989bf2031bcf574fb9d1c2b86176fc"
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.249366 4947 generic.go:334] "Generic (PLEG): container finished" podID="02b69aab-7cbd-4f58-8756-c1c5b615c33d" containerID="d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea" exitCode=0
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.249708 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc"
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.250177 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" event={"ID":"02b69aab-7cbd-4f58-8756-c1c5b615c33d","Type":"ContainerDied","Data":"d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea"}
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.250205 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" event={"ID":"02b69aab-7cbd-4f58-8756-c1c5b615c33d","Type":"ContainerDied","Data":"38a5c9cb609e714547362a4cc611ea212282d2141fd9223ceadf053b5b3b5a11"}
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.288967 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc"]
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.294771 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc"]
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.585197 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6cb4bd4595-z98df"]
Jan 25 00:15:48 crc kubenswrapper[4947]: W0125 00:15:48.601246 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode897be71_7c54_40b1_a607_b102af1b8a61.slice/crio-912113a1bd16a6e817afb1d4e20222dd42d036dbe9bdba110f58925bbe4f61e7 WatchSource:0}: Error finding container 912113a1bd16a6e817afb1d4e20222dd42d036dbe9bdba110f58925bbe4f61e7: Status 404 returned error can't find the container with id 912113a1bd16a6e817afb1d4e20222dd42d036dbe9bdba110f58925bbe4f61e7
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.622204 4947 patch_prober.go:28] interesting pod/controller-manager-5b9d4449c6-5zzzc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.622401 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5b9d4449c6-5zzzc" podUID="02b69aab-7cbd-4f58-8756-c1c5b615c33d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.895155 4947 scope.go:117] "RemoveContainer" containerID="d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea"
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.912525 4947 scope.go:117] "RemoveContainer" containerID="d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea"
Jan 25 00:15:48 crc kubenswrapper[4947]: E0125 00:15:48.913018 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea\": container with ID starting with d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea not found: ID does not exist" containerID="d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea"
Jan 25 00:15:48 crc kubenswrapper[4947]: I0125 00:15:48.913058 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea"} err="failed to get container status \"d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea\": rpc error: code = NotFound desc = could not find container \"d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea\": container with ID starting with d0735a74ff405108e89ba77cc47ad07d34a08e6bc5aebf4ae849139038dfd8ea not found: ID does not exist"
Jan 25 00:15:49 crc kubenswrapper[4947]: I0125 00:15:49.096959 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02b69aab-7cbd-4f58-8756-c1c5b615c33d" path="/var/lib/kubelet/pods/02b69aab-7cbd-4f58-8756-c1c5b615c33d/volumes"
Jan 25 00:15:49 crc kubenswrapper[4947]: I0125 00:15:49.256216 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" event={"ID":"e897be71-7c54-40b1-a607-b102af1b8a61","Type":"ContainerStarted","Data":"efb9b9308bc76e855b422f0c911184e13b7958d025c2c710defa2cfdc8709b62"}
Jan 25 00:15:49 crc kubenswrapper[4947]: I0125 00:15:49.256253 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" event={"ID":"e897be71-7c54-40b1-a607-b102af1b8a61","Type":"ContainerStarted","Data":"912113a1bd16a6e817afb1d4e20222dd42d036dbe9bdba110f58925bbe4f61e7"}
Jan 25 00:15:49 crc kubenswrapper[4947]: I0125 00:15:49.256381 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df"
Jan 25 00:15:49 crc kubenswrapper[4947]: I0125 00:15:49.261030 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df"
Jan 25 00:15:49 crc kubenswrapper[4947]: I0125 00:15:49.279188 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6cb4bd4595-z98df" podStartSLOduration=4.279172229 podStartE2EDuration="4.279172229s" podCreationTimestamp="2026-01-25 00:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:15:49.276062039 +0000 UTC m=+388.509052479" watchObservedRunningTime="2026-01-25 00:15:49.279172229 +0000 UTC m=+388.512162669"
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.078651 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" podUID="ce1b6238-9a41-4472-accc-e4d7d6371357" containerName="registry" containerID="cri-o://51f2c364bfae060665da042ae7dc21f336f532bdfa00072378e4e19dbb303585" gracePeriod=30
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.323243 4947 generic.go:334] "Generic (PLEG): container finished" podID="ce1b6238-9a41-4472-accc-e4d7d6371357" containerID="51f2c364bfae060665da042ae7dc21f336f532bdfa00072378e4e19dbb303585" exitCode=0
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.323298 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" event={"ID":"ce1b6238-9a41-4472-accc-e4d7d6371357","Type":"ContainerDied","Data":"51f2c364bfae060665da042ae7dc21f336f532bdfa00072378e4e19dbb303585"}
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.633023 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4"
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.723375 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-tls\") pod \"ce1b6238-9a41-4472-accc-e4d7d6371357\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") "
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.724023 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ce1b6238-9a41-4472-accc-e4d7d6371357-installation-pull-secrets\") pod \"ce1b6238-9a41-4472-accc-e4d7d6371357\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") "
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.724271 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww4b6\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-kube-api-access-ww4b6\") pod \"ce1b6238-9a41-4472-accc-e4d7d6371357\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") "
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.724541 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ce1b6238-9a41-4472-accc-e4d7d6371357-ca-trust-extracted\") pod \"ce1b6238-9a41-4472-accc-e4d7d6371357\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") "
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.724788 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-certificates\") pod \"ce1b6238-9a41-4472-accc-e4d7d6371357\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") "
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.725356 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"ce1b6238-9a41-4472-accc-e4d7d6371357\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") "
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.725765 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-trusted-ca\") pod \"ce1b6238-9a41-4472-accc-e4d7d6371357\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") "
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.726020 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-bound-sa-token\") pod \"ce1b6238-9a41-4472-accc-e4d7d6371357\" (UID: \"ce1b6238-9a41-4472-accc-e4d7d6371357\") "
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.725822 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "ce1b6238-9a41-4472-accc-e4d7d6371357" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.726665 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "ce1b6238-9a41-4472-accc-e4d7d6371357" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.727044 4947 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.729275 4947 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/ce1b6238-9a41-4472-accc-e4d7d6371357-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.729904 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "ce1b6238-9a41-4472-accc-e4d7d6371357" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.730178 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce1b6238-9a41-4472-accc-e4d7d6371357-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "ce1b6238-9a41-4472-accc-e4d7d6371357" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.730322 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-kube-api-access-ww4b6" (OuterVolumeSpecName: "kube-api-access-ww4b6") pod "ce1b6238-9a41-4472-accc-e4d7d6371357" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357"). InnerVolumeSpecName "kube-api-access-ww4b6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.732410 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "ce1b6238-9a41-4472-accc-e4d7d6371357" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.740272 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "ce1b6238-9a41-4472-accc-e4d7d6371357" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.746246 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce1b6238-9a41-4472-accc-e4d7d6371357-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "ce1b6238-9a41-4472-accc-e4d7d6371357" (UID: "ce1b6238-9a41-4472-accc-e4d7d6371357"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.831189 4947 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.831231 4947 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.831246 4947 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/ce1b6238-9a41-4472-accc-e4d7d6371357-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.831261 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww4b6\" (UniqueName: \"kubernetes.io/projected/ce1b6238-9a41-4472-accc-e4d7d6371357-kube-api-access-ww4b6\") on node \"crc\" DevicePath \"\""
Jan 25 00:15:56 crc kubenswrapper[4947]: I0125 00:15:56.831273 4947 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/ce1b6238-9a41-4472-accc-e4d7d6371357-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 25 00:15:57 crc kubenswrapper[4947]: I0125 00:15:57.334549 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4" event={"ID":"ce1b6238-9a41-4472-accc-e4d7d6371357","Type":"ContainerDied","Data":"8aa2ec1702299cb0f2f7ebe9da84ffc79ac7ec1919bcb49ddb3c081345236f17"}
Jan 25 00:15:57 crc kubenswrapper[4947]: I0125 00:15:57.334631 4947 scope.go:117] "RemoveContainer" containerID="51f2c364bfae060665da042ae7dc21f336f532bdfa00072378e4e19dbb303585"
Jan 25 00:15:57 crc kubenswrapper[4947]: I0125 00:15:57.334642 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mprs4"
Jan 25 00:15:57 crc kubenswrapper[4947]: I0125 00:15:57.365810 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mprs4"]
Jan 25 00:15:57 crc kubenswrapper[4947]: I0125 00:15:57.373997 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mprs4"]
Jan 25 00:15:59 crc kubenswrapper[4947]: I0125 00:15:59.101505 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce1b6238-9a41-4472-accc-e4d7d6371357" path="/var/lib/kubelet/pods/ce1b6238-9a41-4472-accc-e4d7d6371357/volumes"
Jan 25 00:17:47 crc kubenswrapper[4947]: I0125 00:17:47.072771 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 25 00:17:47 crc kubenswrapper[4947]: I0125 00:17:47.073508 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 25 00:18:17 crc kubenswrapper[4947]: I0125 00:18:17.073604 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 25 00:18:17 crc kubenswrapper[4947]: I0125 00:18:17.074285 4947 prober.go:107] "Probe failed" probeType="Liveness"
pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:18:47 crc kubenswrapper[4947]: I0125 00:18:47.072624 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:18:47 crc kubenswrapper[4947]: I0125 00:18:47.073719 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:18:47 crc kubenswrapper[4947]: I0125 00:18:47.073792 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:18:47 crc kubenswrapper[4947]: I0125 00:18:47.074873 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3aad7ba55a48c39182daae557e3aa5f13d9797554883516debfe73624ee84ab3"} pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 00:18:47 crc kubenswrapper[4947]: I0125 00:18:47.075004 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" 
containerID="cri-o://3aad7ba55a48c39182daae557e3aa5f13d9797554883516debfe73624ee84ab3" gracePeriod=600 Jan 25 00:18:47 crc kubenswrapper[4947]: I0125 00:18:47.454242 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerID="3aad7ba55a48c39182daae557e3aa5f13d9797554883516debfe73624ee84ab3" exitCode=0 Jan 25 00:18:47 crc kubenswrapper[4947]: I0125 00:18:47.454336 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerDied","Data":"3aad7ba55a48c39182daae557e3aa5f13d9797554883516debfe73624ee84ab3"} Jan 25 00:18:47 crc kubenswrapper[4947]: I0125 00:18:47.454721 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"f64176709d5607e82072c383022bad7109ba8a5dcfa7f1ccb95f13edf0e8c935"} Jan 25 00:18:47 crc kubenswrapper[4947]: I0125 00:18:47.454816 4947 scope.go:117] "RemoveContainer" containerID="6a30b7c64683957a95e052711b4800e7471651a4fd592c025524ce839a16c59c" Jan 25 00:19:21 crc kubenswrapper[4947]: I0125 00:19:21.472370 4947 scope.go:117] "RemoveContainer" containerID="87af76d9cedf9765995fdba192251417d6cb96cc8dbaac5f8d89ebd77523cb24" Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.617576 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fvfwz"] Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.618935 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovn-controller" containerID="cri-o://d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a" gracePeriod=30 Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.619174 4947 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="sbdb" containerID="cri-o://f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3" gracePeriod=30 Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.619216 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="nbdb" containerID="cri-o://f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a" gracePeriod=30 Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.619251 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="northd" containerID="cri-o://ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f" gracePeriod=30 Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.619285 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d" gracePeriod=30 Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.619316 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="kube-rbac-proxy-node" containerID="cri-o://8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53" gracePeriod=30 Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.619358 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" 
containerName="ovn-acl-logging" containerID="cri-o://dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7" gracePeriod=30 Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.659902 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" containerID="cri-o://c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c" gracePeriod=30 Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.960209 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/3.log" Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.962954 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovn-acl-logging/0.log" Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.963855 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovn-controller/0.log" Jan 25 00:20:18 crc kubenswrapper[4947]: I0125 00:20:18.964380 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008157 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-log-socket\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008209 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-bin\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008238 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-openvswitch\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008267 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-script-lib\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008287 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-slash\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008307 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-netns\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008328 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-netd\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008348 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-ovn-kubernetes\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008367 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-ovn\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008388 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-env-overrides\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008465 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-kubelet\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: 
I0125 00:20:19.008498 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-config\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008526 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-systemd\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008544 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-var-lib-openvswitch\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008570 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-var-lib-cni-networks-ovn-kubernetes\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008568 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-slash" (OuterVolumeSpecName: "host-slash") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008596 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8bf5f940-5287-40f1-b208-535cdfcb0054-ovn-node-metrics-cert\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008643 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008720 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-etc-openvswitch\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008686 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-log-socket" (OuterVolumeSpecName: "log-socket") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008754 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). 
InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008811 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-node-log" (OuterVolumeSpecName: "node-log") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008832 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008787 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-node-log\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008866 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008895 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.008935 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-systemd-units\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009040 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh6bp\" (UniqueName: \"kubernetes.io/projected/8bf5f940-5287-40f1-b208-535cdfcb0054-kube-api-access-xh6bp\") pod \"8bf5f940-5287-40f1-b208-535cdfcb0054\" (UID: \"8bf5f940-5287-40f1-b208-535cdfcb0054\") " Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009059 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009185 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009387 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009537 4947 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009574 4947 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-node-log\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009601 4947 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-log-socket\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009627 4947 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009674 4947 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009700 4947 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-slash\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009726 4947 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009501 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009587 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009636 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009671 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.009823 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.010609 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.015648 4947 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.015726 4947 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.020295 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bf5f940-5287-40f1-b208-535cdfcb0054-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.025471 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bf5f940-5287-40f1-b208-535cdfcb0054-kube-api-access-xh6bp" (OuterVolumeSpecName: "kube-api-access-xh6bp") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "kube-api-access-xh6bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.031273 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "8bf5f940-5287-40f1-b208-535cdfcb0054" (UID: "8bf5f940-5287-40f1-b208-535cdfcb0054"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.056657 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-rqxpr"] Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057072 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovn-acl-logging" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057106 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovn-acl-logging" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057157 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="sbdb" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057172 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="sbdb" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057205 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057221 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057243 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce1b6238-9a41-4472-accc-e4d7d6371357" containerName="registry" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057256 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce1b6238-9a41-4472-accc-e4d7d6371357" containerName="registry" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057273 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 
00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057286 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057307 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="kube-rbac-proxy-node" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057320 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="kube-rbac-proxy-node" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057336 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057351 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057368 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovn-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057381 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovn-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057400 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="northd" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057413 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="northd" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057428 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="nbdb" Jan 25 00:20:19 crc 
kubenswrapper[4947]: I0125 00:20:19.057444 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="nbdb" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057462 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057475 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057492 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="kubecfg-setup" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057505 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="kubecfg-setup" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057522 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="kube-rbac-proxy-ovn-metrics" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057538 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="kube-rbac-proxy-ovn-metrics" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.057558 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057572 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057770 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="kube-rbac-proxy-ovn-metrics" Jan 
25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057798 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057822 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057836 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovn-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057854 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057870 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovn-acl-logging" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057885 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="northd" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057903 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce1b6238-9a41-4472-accc-e4d7d6371357" containerName="registry" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057922 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="kube-rbac-proxy-node" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057940 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="sbdb" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.057958 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" 
containerName="nbdb" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.058398 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.058423 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerName="ovnkube-controller" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.065652 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.083330 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovnkube-controller/3.log" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.086406 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovn-acl-logging/0.log" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087093 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fvfwz_8bf5f940-5287-40f1-b208-535cdfcb0054/ovn-controller/0.log" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087702 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c" exitCode=0 Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087755 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3" exitCode=0 Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087771 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" 
containerID="f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a" exitCode=0 Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087787 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f" exitCode=0 Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087801 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d" exitCode=0 Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087795 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087874 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087890 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087914 4947 scope.go:117] "RemoveContainer" containerID="c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087817 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53" exitCode=0 Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087965 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7" exitCode=143 Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087981 4947 generic.go:334] "Generic (PLEG): container finished" podID="8bf5f940-5287-40f1-b208-535cdfcb0054" containerID="d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a" exitCode=143 Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.087894 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088180 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088209 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"} Jan 25 00:20:19 crc 
kubenswrapper[4947]: I0125 00:20:19.088230 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088252 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088276 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088289 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088301 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088313 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088324 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088334 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088345 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088356 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088370 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088390 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088403 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088414 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088427 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088438 4947 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088449 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088462 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088472 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088483 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088494 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088510 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088526 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"} Jan 25 
00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088539 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088552 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088564 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088610 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088621 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088631 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.088642 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.089734 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"} Jan 25 
00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.089746 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.091772 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9fspn_2d914454-2c17-47f2-aa53-aba3bfaad296/kube-multus/2.log" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.101497 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9fspn_2d914454-2c17-47f2-aa53-aba3bfaad296/kube-multus/1.log" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.101571 4947 generic.go:334] "Generic (PLEG): container finished" podID="2d914454-2c17-47f2-aa53-aba3bfaad296" containerID="c7e4bdd1edd4881050f59041c1d6812e03c6fe683d1104b228fa6e3da5f11032" exitCode=2 Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115663 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fvfwz" event={"ID":"8bf5f940-5287-40f1-b208-535cdfcb0054","Type":"ContainerDied","Data":"ab91914d18e527be722f5e70489e90096dc0e627d44b69e63be506f96778e303"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115727 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115753 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115763 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"} Jan 25 
00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115775 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115784 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115792 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115800 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115812 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115821 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115829 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115853 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9fspn" 
event={"ID":"2d914454-2c17-47f2-aa53-aba3bfaad296","Type":"ContainerDied","Data":"c7e4bdd1edd4881050f59041c1d6812e03c6fe683d1104b228fa6e3da5f11032"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.115873 4947 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6c0ba6afdf615533c6d174c24f3d76e5070b4436d09dbe07c9a6c595203d5470"} Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.116408 4947 scope.go:117] "RemoveContainer" containerID="c7e4bdd1edd4881050f59041c1d6812e03c6fe683d1104b228fa6e3da5f11032" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.116690 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-9fspn_openshift-multus(2d914454-2c17-47f2-aa53-aba3bfaad296)\"" pod="openshift-multus/multus-9fspn" podUID="2d914454-2c17-47f2-aa53-aba3bfaad296" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.118454 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-kubelet\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.118525 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-cni-netd\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.118841 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-var-lib-openvswitch\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.119003 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-etc-openvswitch\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.119028 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-env-overrides\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.120394 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-systemd-units\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.120629 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-run-ovn\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.120758 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-node-log\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.120808 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-ovn-node-metrics-cert\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.120929 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-ovnkube-config\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.121003 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmcj5\" (UniqueName: \"kubernetes.io/projected/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-kube-api-access-bmcj5\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.121030 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-run-systemd\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.121097 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-run-ovn-kubernetes\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.121156 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.121192 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-run-netns\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.121243 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-run-openvswitch\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.121277 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-slash\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.121337 4947 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-log-socket\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.121363 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-cni-bin\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.122354 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-ovnkube-script-lib\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.122525 4947 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.122542 4947 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.122555 4947 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 
00:20:19.122568 4947 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.122579 4947 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.122588 4947 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.122598 4947 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8bf5f940-5287-40f1-b208-535cdfcb0054-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.122609 4947 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.122622 4947 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8bf5f940-5287-40f1-b208-535cdfcb0054-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.122638 4947 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8bf5f940-5287-40f1-b208-535cdfcb0054-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.122652 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh6bp\" (UniqueName: 
\"kubernetes.io/projected/8bf5f940-5287-40f1-b208-535cdfcb0054-kube-api-access-xh6bp\") on node \"crc\" DevicePath \"\"" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.136616 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fvfwz"] Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.138698 4947 scope.go:117] "RemoveContainer" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.144089 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fvfwz"] Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.180972 4947 scope.go:117] "RemoveContainer" containerID="f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.199043 4947 scope.go:117] "RemoveContainer" containerID="f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.218616 4947 scope.go:117] "RemoveContainer" containerID="ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224094 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-run-ovn-kubernetes\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224165 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224214 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-run-netns\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224236 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-run-openvswitch\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224236 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-run-ovn-kubernetes\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224276 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-slash\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224309 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-log-socket\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc 
kubenswrapper[4947]: I0125 00:20:19.224308 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-run-netns\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224351 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224355 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-cni-bin\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224405 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-cni-bin\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224413 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-ovnkube-script-lib\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224481 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-kubelet\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224526 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-cni-netd\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224559 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-var-lib-openvswitch\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224600 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-etc-openvswitch\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224689 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-run-openvswitch\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224739 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-var-lib-openvswitch\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224750 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-kubelet\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224767 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-cni-netd\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224853 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-log-socket\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224896 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-etc-openvswitch\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.224963 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-host-slash\") pod 
\"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.225085 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-env-overrides\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.225116 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-systemd-units\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.225170 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-run-ovn\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.225228 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-systemd-units\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.225277 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-ovn-node-metrics-cert\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.225326 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-run-ovn\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.225339 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-ovnkube-script-lib\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.225369 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-node-log\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.225400 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-node-log\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.225435 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-ovnkube-config\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.226204 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-ovnkube-config\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.226204 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmcj5\" (UniqueName: \"kubernetes.io/projected/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-kube-api-access-bmcj5\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.226240 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-env-overrides\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.226283 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-run-systemd\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.226262 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-run-systemd\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.230434 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-ovn-node-metrics-cert\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.238110 4947 scope.go:117] "RemoveContainer" containerID="dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.247369 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmcj5\" (UniqueName: \"kubernetes.io/projected/26019576-c357-40d3-a0aa-8dcc2ad9d1fa-kube-api-access-bmcj5\") pod \"ovnkube-node-rqxpr\" (UID: \"26019576-c357-40d3-a0aa-8dcc2ad9d1fa\") " pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.258978 4947 scope.go:117] "RemoveContainer" containerID="8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.280521 4947 scope.go:117] "RemoveContainer" containerID="dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.302479 4947 scope.go:117] "RemoveContainer" containerID="d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.322831 4947 scope.go:117] "RemoveContainer" containerID="dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.360530 4947 scope.go:117] "RemoveContainer" containerID="c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.361401 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c\": container with ID starting with 
c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c not found: ID does not exist" containerID="c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.361500 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"} err="failed to get container status \"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c\": rpc error: code = NotFound desc = could not find container \"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c\": container with ID starting with c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.361550 4947 scope.go:117] "RemoveContainer" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.362281 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\": container with ID starting with a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e not found: ID does not exist" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.362377 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"} err="failed to get container status \"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\": rpc error: code = NotFound desc = could not find container \"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\": container with ID starting with a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e not found: ID does not 
exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.362453 4947 scope.go:117] "RemoveContainer" containerID="f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.363021 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\": container with ID starting with f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3 not found: ID does not exist" containerID="f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.363110 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"} err="failed to get container status \"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\": rpc error: code = NotFound desc = could not find container \"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\": container with ID starting with f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3 not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.363188 4947 scope.go:117] "RemoveContainer" containerID="f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.363748 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\": container with ID starting with f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a not found: ID does not exist" containerID="f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.363841 4947 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"} err="failed to get container status \"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\": rpc error: code = NotFound desc = could not find container \"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\": container with ID starting with f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.363913 4947 scope.go:117] "RemoveContainer" containerID="ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.364793 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\": container with ID starting with ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f not found: ID does not exist" containerID="ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.364868 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"} err="failed to get container status \"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\": rpc error: code = NotFound desc = could not find container \"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\": container with ID starting with ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.364918 4947 scope.go:117] "RemoveContainer" containerID="dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.365647 4947 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\": container with ID starting with dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d not found: ID does not exist" containerID="dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.365702 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"} err="failed to get container status \"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\": rpc error: code = NotFound desc = could not find container \"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\": container with ID starting with dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.365730 4947 scope.go:117] "RemoveContainer" containerID="8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.366436 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\": container with ID starting with 8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53 not found: ID does not exist" containerID="8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.366659 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"} err="failed to get container status \"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\": rpc error: code = NotFound desc = could 
not find container \"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\": container with ID starting with 8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53 not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.366779 4947 scope.go:117] "RemoveContainer" containerID="dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.367525 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\": container with ID starting with dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7 not found: ID does not exist" containerID="dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.367565 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"} err="failed to get container status \"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\": rpc error: code = NotFound desc = could not find container \"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\": container with ID starting with dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7 not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.367588 4947 scope.go:117] "RemoveContainer" containerID="d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.368103 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\": container with ID starting with d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a not found: 
ID does not exist" containerID="d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.368216 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"} err="failed to get container status \"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\": rpc error: code = NotFound desc = could not find container \"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\": container with ID starting with d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.368289 4947 scope.go:117] "RemoveContainer" containerID="dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1" Jan 25 00:20:19 crc kubenswrapper[4947]: E0125 00:20:19.368988 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\": container with ID starting with dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1 not found: ID does not exist" containerID="dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.369055 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1"} err="failed to get container status \"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\": rpc error: code = NotFound desc = could not find container \"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\": container with ID starting with dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1 not found: ID does not exist" Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.369104 4947 
scope.go:117] "RemoveContainer" containerID="c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.370286 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"} err="failed to get container status \"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c\": rpc error: code = NotFound desc = could not find container \"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c\": container with ID starting with c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.370378 4947 scope.go:117] "RemoveContainer" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.371081 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"} err="failed to get container status \"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\": rpc error: code = NotFound desc = could not find container \"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\": container with ID starting with a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.371177 4947 scope.go:117] "RemoveContainer" containerID="f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.371692 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"} err="failed to get container status \"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\": rpc error: code = NotFound desc = could not find container \"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\": container with ID starting with f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3 not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.371794 4947 scope.go:117] "RemoveContainer" containerID="f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.372296 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"} err="failed to get container status \"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\": rpc error: code = NotFound desc = could not find container \"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\": container with ID starting with f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.372352 4947 scope.go:117] "RemoveContainer" containerID="ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.372705 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"} err="failed to get container status \"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\": rpc error: code = NotFound desc = could not find container \"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\": container with ID starting with ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.372755 4947 scope.go:117] "RemoveContainer" containerID="dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.373324 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"} err="failed to get container status \"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\": rpc error: code = NotFound desc = could not find container \"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\": container with ID starting with dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.373384 4947 scope.go:117] "RemoveContainer" containerID="8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.373933 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"} err="failed to get container status \"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\": rpc error: code = NotFound desc = could not find container \"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\": container with ID starting with 8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53 not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.374065 4947 scope.go:117] "RemoveContainer" containerID="dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.374962 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"} err="failed to get container status \"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\": rpc error: code = NotFound desc = could not find container \"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\": container with ID starting with dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7 not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.374998 4947 scope.go:117] "RemoveContainer" containerID="d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.375657 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"} err="failed to get container status \"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\": rpc error: code = NotFound desc = could not find container \"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\": container with ID starting with d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.375742 4947 scope.go:117] "RemoveContainer" containerID="dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.376316 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1"} err="failed to get container status \"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\": rpc error: code = NotFound desc = could not find container \"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\": container with ID starting with dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1 not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.376355 4947 scope.go:117] "RemoveContainer" containerID="c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.377185 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"} err="failed to get container status \"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c\": rpc error: code = NotFound desc = could not find container \"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c\": container with ID starting with c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.377290 4947 scope.go:117] "RemoveContainer" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.377795 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"} err="failed to get container status \"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\": rpc error: code = NotFound desc = could not find container \"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\": container with ID starting with a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.377831 4947 scope.go:117] "RemoveContainer" containerID="f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.378455 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"} err="failed to get container status \"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\": rpc error: code = NotFound desc = could not find container \"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\": container with ID starting with f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3 not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.378558 4947 scope.go:117] "RemoveContainer" containerID="f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.379198 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"} err="failed to get container status \"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\": rpc error: code = NotFound desc = could not find container \"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\": container with ID starting with f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.379233 4947 scope.go:117] "RemoveContainer" containerID="ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.379985 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"} err="failed to get container status \"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\": rpc error: code = NotFound desc = could not find container \"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\": container with ID starting with ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.380045 4947 scope.go:117] "RemoveContainer" containerID="dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.380559 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"} err="failed to get container status \"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\": rpc error: code = NotFound desc = could not find container \"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\": container with ID starting with dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.380593 4947 scope.go:117] "RemoveContainer" containerID="8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.388685 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"} err="failed to get container status \"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\": rpc error: code = NotFound desc = could not find container \"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\": container with ID starting with 8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53 not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.388787 4947 scope.go:117] "RemoveContainer" containerID="dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.388876 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.390265 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"} err="failed to get container status \"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\": rpc error: code = NotFound desc = could not find container \"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\": container with ID starting with dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7 not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.390326 4947 scope.go:117] "RemoveContainer" containerID="d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.391175 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"} err="failed to get container status \"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\": rpc error: code = NotFound desc = could not find container \"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\": container with ID starting with d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.391260 4947 scope.go:117] "RemoveContainer" containerID="dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.391833 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1"} err="failed to get container status \"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\": rpc error: code = NotFound desc = could not find container \"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\": container with ID starting with dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1 not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.391885 4947 scope.go:117] "RemoveContainer" containerID="c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.392740 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"} err="failed to get container status \"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c\": rpc error: code = NotFound desc = could not find container \"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c\": container with ID starting with c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.392775 4947 scope.go:117] "RemoveContainer" containerID="a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.393892 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e"} err="failed to get container status \"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\": rpc error: code = NotFound desc = could not find container \"a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e\": container with ID starting with a0fb213e40c4e4a8c9efacb77e95185f91ee01863694c7561367d6829823260e not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.393982 4947 scope.go:117] "RemoveContainer" containerID="f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.394929 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3"} err="failed to get container status \"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\": rpc error: code = NotFound desc = could not find container \"f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3\": container with ID starting with f31485e2c1ab758a293443e6d3506506d247c11ef604d6e5d299720de2981ff3 not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.395031 4947 scope.go:117] "RemoveContainer" containerID="f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.395621 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a"} err="failed to get container status \"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\": rpc error: code = NotFound desc = could not find container \"f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a\": container with ID starting with f711a1f56fa016c87aa93800e9994c75f3a180c55f46f4cfc2ee1bfb4f1f0b8a not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.395658 4947 scope.go:117] "RemoveContainer" containerID="ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.396471 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f"} err="failed to get container status \"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\": rpc error: code = NotFound desc = could not find container \"ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f\": container with ID starting with ea6c1427f65d4cb1f8e097c0ffbe16ef422e298f7f5c6675b909533eacc4849f not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.396517 4947 scope.go:117] "RemoveContainer" containerID="dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.397275 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d"} err="failed to get container status \"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\": rpc error: code = NotFound desc = could not find container \"dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d\": container with ID starting with dbddbcaef23f7d1ca94b5a059c35e2c828404dd98c1df8d467fbc48d4c269d7d not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.397314 4947 scope.go:117] "RemoveContainer" containerID="8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.397919 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53"} err="failed to get container status \"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\": rpc error: code = NotFound desc = could not find container \"8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53\": container with ID starting with 8f230ebe8c2e4659a27693831e9477c97eab4fe0d933d861d6052e48c8c83a53 not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.397969 4947 scope.go:117] "RemoveContainer" containerID="dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.398528 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7"} err="failed to get container status \"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\": rpc error: code = NotFound desc = could not find container \"dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7\": container with ID starting with dab79660723e52e592af02a0f77b5d7de44b7c76a1f0d9c2024cabd02284f3e7 not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.398565 4947 scope.go:117] "RemoveContainer" containerID="d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.399191 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a"} err="failed to get container status \"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\": rpc error: code = NotFound desc = could not find container \"d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a\": container with ID starting with d7c4ed3210501f223d866d8b8d9fdd53663810ea0a8c80954b65c897f945f86a not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.399233 4947 scope.go:117] "RemoveContainer" containerID="dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.399663 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1"} err="failed to get container status \"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\": rpc error: code = NotFound desc = could not find container \"dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1\": container with ID starting with dbd15a54e9ab474166ecadcc4f7e3c3bf170ea13f3a35b9f75bda1c0427651a1 not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.399705 4947 scope.go:117] "RemoveContainer" containerID="c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"
Jan 25 00:20:19 crc kubenswrapper[4947]: I0125 00:20:19.400159 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c"} err="failed to get container status \"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c\": rpc error: code = NotFound desc = could not find container \"c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c\": container with ID starting with c28cda7e668e9892837bdba11dd8b674f26cd5e0b6f7434a0e2c674428f8ca9c not found: ID does not exist"
Jan 25 00:20:19 crc kubenswrapper[4947]: W0125 00:20:19.425024 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26019576_c357_40d3_a0aa_8dcc2ad9d1fa.slice/crio-7add459e945719b87e9d9432e381e9c983c1be15297d09f282af04213e0a5f77 WatchSource:0}: Error finding container 7add459e945719b87e9d9432e381e9c983c1be15297d09f282af04213e0a5f77: Status 404 returned error can't find the container with id 7add459e945719b87e9d9432e381e9c983c1be15297d09f282af04213e0a5f77
Jan 25 00:20:20 crc kubenswrapper[4947]: I0125 00:20:20.117052 4947 generic.go:334] "Generic (PLEG): container finished" podID="26019576-c357-40d3-a0aa-8dcc2ad9d1fa" containerID="848d58f9bb4672b4ee2b5cc6cfc9cc94bf7b1a5819ab33ba7dbbe132658a0fbf" exitCode=0
Jan 25 00:20:20 crc kubenswrapper[4947]: I0125 00:20:20.117114 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" event={"ID":"26019576-c357-40d3-a0aa-8dcc2ad9d1fa","Type":"ContainerDied","Data":"848d58f9bb4672b4ee2b5cc6cfc9cc94bf7b1a5819ab33ba7dbbe132658a0fbf"}
Jan 25 00:20:20 crc kubenswrapper[4947]: I0125 00:20:20.117213 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" event={"ID":"26019576-c357-40d3-a0aa-8dcc2ad9d1fa","Type":"ContainerStarted","Data":"7add459e945719b87e9d9432e381e9c983c1be15297d09f282af04213e0a5f77"}
Jan 25 00:20:21 crc kubenswrapper[4947]: I0125 00:20:21.103842 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bf5f940-5287-40f1-b208-535cdfcb0054" path="/var/lib/kubelet/pods/8bf5f940-5287-40f1-b208-535cdfcb0054/volumes"
Jan 25 00:20:21 crc kubenswrapper[4947]: I0125 00:20:21.140194 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" event={"ID":"26019576-c357-40d3-a0aa-8dcc2ad9d1fa","Type":"ContainerStarted","Data":"6c40e7aa0f5c145e7e67d7194919c662fb4abe38b3ca460972b8717bd1a79339"}
Jan 25 00:20:21 crc kubenswrapper[4947]: I0125 00:20:21.140270 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" event={"ID":"26019576-c357-40d3-a0aa-8dcc2ad9d1fa","Type":"ContainerStarted","Data":"c22f3919103aef95055fd843f7ceaf97753d1b9c8c1a736f00131c3496454dbf"}
Jan 25 00:20:21 crc kubenswrapper[4947]: I0125 00:20:21.140284 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" event={"ID":"26019576-c357-40d3-a0aa-8dcc2ad9d1fa","Type":"ContainerStarted","Data":"008717caf9281dc7b2d7b230e0e01d18dbb3014f90629d681345ede2636282ce"}
Jan 25 00:20:21 crc kubenswrapper[4947]: I0125 00:20:21.140294 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" event={"ID":"26019576-c357-40d3-a0aa-8dcc2ad9d1fa","Type":"ContainerStarted","Data":"0019d2e51271ed6a4d947dcdb47b1fbce4e27d1c84e35bcd08294986f721ae9b"}
Jan 25 00:20:21 crc kubenswrapper[4947]: I0125 00:20:21.140305 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" event={"ID":"26019576-c357-40d3-a0aa-8dcc2ad9d1fa","Type":"ContainerStarted","Data":"06d6f997cea4cb672b8347913be13e77780b233c528828b0a18a29a7153fc3e8"}
Jan 25 00:20:21 crc kubenswrapper[4947]: I0125 00:20:21.140315 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" event={"ID":"26019576-c357-40d3-a0aa-8dcc2ad9d1fa","Type":"ContainerStarted","Data":"88bd81431345d87b3a9aa1e630bc8abfc3a6b6277d7b01990e79770a67584d34"}
Jan 25 00:20:21 crc kubenswrapper[4947]: I0125 00:20:21.537069 4947 scope.go:117] "RemoveContainer" containerID="6c0ba6afdf615533c6d174c24f3d76e5070b4436d09dbe07c9a6c595203d5470"
Jan 25 00:20:22 crc kubenswrapper[4947]: I0125 00:20:22.149200 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9fspn_2d914454-2c17-47f2-aa53-aba3bfaad296/kube-multus/2.log"
Jan 25 00:20:24 crc kubenswrapper[4947]: I0125 00:20:24.168499 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" event={"ID":"26019576-c357-40d3-a0aa-8dcc2ad9d1fa","Type":"ContainerStarted","Data":"12471d0a883816fa7afeaf2a9acf8e7d331b957de65795c4b508903d395b655c"}
Jan 25 00:20:26 crc kubenswrapper[4947]: I0125 00:20:26.212882 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" event={"ID":"26019576-c357-40d3-a0aa-8dcc2ad9d1fa","Type":"ContainerStarted","Data":"f5b59d180938d74af0d203991b99c2039627967b3e3ddb702b8b4cc8b954b401"}
Jan 25 00:20:26 crc kubenswrapper[4947]: I0125 00:20:26.213317 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr"
Jan 25 00:20:26 crc kubenswrapper[4947]: I0125 00:20:26.213381 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr"
Jan 25 00:20:26 crc kubenswrapper[4947]: I0125 00:20:26.213393 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr"
Jan 25 00:20:26 crc kubenswrapper[4947]: I0125 00:20:26.256836 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr"
Jan 25 00:20:26 crc kubenswrapper[4947]: I0125 00:20:26.269051 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr"
Jan 25 00:20:26 crc kubenswrapper[4947]: I0125 00:20:26.281281 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr" podStartSLOduration=7.281262903 podStartE2EDuration="7.281262903s" podCreationTimestamp="2026-01-25 00:20:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:20:26.276892567 +0000 UTC m=+665.509883027" watchObservedRunningTime="2026-01-25 00:20:26.281262903 +0000 UTC m=+665.514253343"
Jan 25 00:20:30 crc kubenswrapper[4947]: I0125 00:20:30.090167 4947 scope.go:117] "RemoveContainer" containerID="c7e4bdd1edd4881050f59041c1d6812e03c6fe683d1104b228fa6e3da5f11032"
Jan 25 00:20:30 crc kubenswrapper[4947]: E0125 00:20:30.091421 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-9fspn_openshift-multus(2d914454-2c17-47f2-aa53-aba3bfaad296)\"" pod="openshift-multus/multus-9fspn" podUID="2d914454-2c17-47f2-aa53-aba3bfaad296"
Jan 25 00:20:44 crc kubenswrapper[4947]: I0125 00:20:44.090297 4947 scope.go:117] "RemoveContainer" containerID="c7e4bdd1edd4881050f59041c1d6812e03c6fe683d1104b228fa6e3da5f11032"
Jan 25 00:20:45 crc kubenswrapper[4947]: I0125 00:20:45.347865 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9fspn_2d914454-2c17-47f2-aa53-aba3bfaad296/kube-multus/2.log"
Jan 25 00:20:45 crc kubenswrapper[4947]: I0125 00:20:45.348179 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9fspn" event={"ID":"2d914454-2c17-47f2-aa53-aba3bfaad296","Type":"ContainerStarted","Data":"85d7a3877eb8acd8e754ee1a34752e543f29301ad8c351bf0c5981d15ee40ac6"}
Jan 25 00:20:47 crc kubenswrapper[4947]: I0125 00:20:47.072816 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 25 00:20:47 crc kubenswrapper[4947]: I0125 00:20:47.073274 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 25 00:20:49 crc kubenswrapper[4947]: I0125 00:20:49.428503 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-rqxpr"
Jan 25 00:21:17 crc kubenswrapper[4947]: I0125 00:21:17.072975 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 25 00:21:17 crc kubenswrapper[4947]: I0125 00:21:17.073596 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 25 00:21:29 crc kubenswrapper[4947]: I0125 00:21:29.487360 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwxx4"]
Jan 25 00:21:29 crc kubenswrapper[4947]: I0125 00:21:29.488167 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hwxx4" podUID="24ae0891-d29f-45fe-be30-a46f76a39dda" containerName="registry-server" containerID="cri-o://e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e" gracePeriod=30
Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.414266 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwxx4"
Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.448745 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-utilities\") pod \"24ae0891-d29f-45fe-be30-a46f76a39dda\" (UID: \"24ae0891-d29f-45fe-be30-a46f76a39dda\") "
Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.448796 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-catalog-content\") pod \"24ae0891-d29f-45fe-be30-a46f76a39dda\" (UID: \"24ae0891-d29f-45fe-be30-a46f76a39dda\") "
Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.448852 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdqff\" (UniqueName: \"kubernetes.io/projected/24ae0891-d29f-45fe-be30-a46f76a39dda-kube-api-access-hdqff\") pod \"24ae0891-d29f-45fe-be30-a46f76a39dda\" (UID: \"24ae0891-d29f-45fe-be30-a46f76a39dda\") "
Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.450692 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-utilities" (OuterVolumeSpecName: "utilities") pod "24ae0891-d29f-45fe-be30-a46f76a39dda" (UID: "24ae0891-d29f-45fe-be30-a46f76a39dda"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.458335 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24ae0891-d29f-45fe-be30-a46f76a39dda-kube-api-access-hdqff" (OuterVolumeSpecName: "kube-api-access-hdqff") pod "24ae0891-d29f-45fe-be30-a46f76a39dda" (UID: "24ae0891-d29f-45fe-be30-a46f76a39dda"). InnerVolumeSpecName "kube-api-access-hdqff". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.471092 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "24ae0891-d29f-45fe-be30-a46f76a39dda" (UID: "24ae0891-d29f-45fe-be30-a46f76a39dda"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.549972 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-utilities\") on node \"crc\" DevicePath \"\""
Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.550009 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/24ae0891-d29f-45fe-be30-a46f76a39dda-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.550024 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdqff\" (UniqueName: \"kubernetes.io/projected/24ae0891-d29f-45fe-be30-a46f76a39dda-kube-api-access-hdqff\") on node \"crc\" DevicePath \"\""
Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.667529 4947 generic.go:334] "Generic (PLEG): container finished" podID="24ae0891-d29f-45fe-be30-a46f76a39dda" containerID="e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e" exitCode=0
Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.667583 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwxx4" event={"ID":"24ae0891-d29f-45fe-be30-a46f76a39dda","Type":"ContainerDied","Data":"e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e"}
Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.667614 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hwxx4" event={"ID":"24ae0891-d29f-45fe-be30-a46f76a39dda","Type":"ContainerDied","Data":"aea55c0ecb2555c344ea84eb69f61a3ca31d0786e037f7a2f29477da1e97cbae"}
Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.667640 4947 scope.go:117] "RemoveContainer" containerID="e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e"
Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.667784 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hwxx4"
Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.685204 4947 scope.go:117] "RemoveContainer" containerID="24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870"
Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.699947 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwxx4"]
Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.701642 4947 scope.go:117] "RemoveContainer" containerID="3afef73f532178d1f4ce7234f366893070291c212faf97513ba9f77386409c97"
Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.706490 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hwxx4"]
Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.722809 4947 scope.go:117] "RemoveContainer" containerID="e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e"
Jan 25 00:21:30 crc kubenswrapper[4947]: E0125 00:21:30.723169 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e\": container with ID starting with e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e not found: ID does not exist" containerID="e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e"
Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.723204 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e"} err="failed to get container status \"e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e\": rpc error: code = NotFound desc = could not find container \"e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e\": container with ID starting with
e8ba3522af6a9edf7ea36415a9ce61bd4e6daf8766d9510d0b02847715256d0e not found: ID does not exist" Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.723229 4947 scope.go:117] "RemoveContainer" containerID="24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870" Jan 25 00:21:30 crc kubenswrapper[4947]: E0125 00:21:30.723444 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870\": container with ID starting with 24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870 not found: ID does not exist" containerID="24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870" Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.723473 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870"} err="failed to get container status \"24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870\": rpc error: code = NotFound desc = could not find container \"24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870\": container with ID starting with 24e22aa04c846740854e7340e2b14eef4110af3d206baaf96197adb243fb1870 not found: ID does not exist" Jan 25 00:21:30 crc kubenswrapper[4947]: I0125 00:21:30.723490 4947 scope.go:117] "RemoveContainer" containerID="3afef73f532178d1f4ce7234f366893070291c212faf97513ba9f77386409c97" Jan 25 00:21:30 crc kubenswrapper[4947]: E0125 00:21:30.723736 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3afef73f532178d1f4ce7234f366893070291c212faf97513ba9f77386409c97\": container with ID starting with 3afef73f532178d1f4ce7234f366893070291c212faf97513ba9f77386409c97 not found: ID does not exist" containerID="3afef73f532178d1f4ce7234f366893070291c212faf97513ba9f77386409c97" Jan 25 00:21:30 crc 
kubenswrapper[4947]: I0125 00:21:30.723761 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3afef73f532178d1f4ce7234f366893070291c212faf97513ba9f77386409c97"} err="failed to get container status \"3afef73f532178d1f4ce7234f366893070291c212faf97513ba9f77386409c97\": rpc error: code = NotFound desc = could not find container \"3afef73f532178d1f4ce7234f366893070291c212faf97513ba9f77386409c97\": container with ID starting with 3afef73f532178d1f4ce7234f366893070291c212faf97513ba9f77386409c97 not found: ID does not exist" Jan 25 00:21:31 crc kubenswrapper[4947]: I0125 00:21:31.096159 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24ae0891-d29f-45fe-be30-a46f76a39dda" path="/var/lib/kubelet/pods/24ae0891-d29f-45fe-be30-a46f76a39dda/volumes" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.663052 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw"] Jan 25 00:21:33 crc kubenswrapper[4947]: E0125 00:21:33.663655 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ae0891-d29f-45fe-be30-a46f76a39dda" containerName="extract-utilities" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.663675 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ae0891-d29f-45fe-be30-a46f76a39dda" containerName="extract-utilities" Jan 25 00:21:33 crc kubenswrapper[4947]: E0125 00:21:33.663698 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24ae0891-d29f-45fe-be30-a46f76a39dda" containerName="registry-server" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.663743 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ae0891-d29f-45fe-be30-a46f76a39dda" containerName="registry-server" Jan 25 00:21:33 crc kubenswrapper[4947]: E0125 00:21:33.663770 4947 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="24ae0891-d29f-45fe-be30-a46f76a39dda" containerName="extract-content" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.663785 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="24ae0891-d29f-45fe-be30-a46f76a39dda" containerName="extract-content" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.663957 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="24ae0891-d29f-45fe-be30-a46f76a39dda" containerName="registry-server" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.665120 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.668322 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.682608 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw"] Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.805725 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnm6g\" (UniqueName: \"kubernetes.io/projected/e1924f8a-318d-4d3b-ada5-703cf399beed-kube-api-access-jnm6g\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw\" (UID: \"e1924f8a-318d-4d3b-ada5-703cf399beed\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.805799 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw\" (UID: \"e1924f8a-318d-4d3b-ada5-703cf399beed\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.805844 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw\" (UID: \"e1924f8a-318d-4d3b-ada5-703cf399beed\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.906446 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw\" (UID: \"e1924f8a-318d-4d3b-ada5-703cf399beed\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.906733 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw\" (UID: \"e1924f8a-318d-4d3b-ada5-703cf399beed\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.906858 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnm6g\" (UniqueName: \"kubernetes.io/projected/e1924f8a-318d-4d3b-ada5-703cf399beed-kube-api-access-jnm6g\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw\" (UID: \"e1924f8a-318d-4d3b-ada5-703cf399beed\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:33 crc kubenswrapper[4947]: 
I0125 00:21:33.907730 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw\" (UID: \"e1924f8a-318d-4d3b-ada5-703cf399beed\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.907863 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw\" (UID: \"e1924f8a-318d-4d3b-ada5-703cf399beed\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.944885 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnm6g\" (UniqueName: \"kubernetes.io/projected/e1924f8a-318d-4d3b-ada5-703cf399beed-kube-api-access-jnm6g\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw\" (UID: \"e1924f8a-318d-4d3b-ada5-703cf399beed\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:33 crc kubenswrapper[4947]: I0125 00:21:33.986267 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:34 crc kubenswrapper[4947]: I0125 00:21:34.143285 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw"] Jan 25 00:21:34 crc kubenswrapper[4947]: I0125 00:21:34.712724 4947 generic.go:334] "Generic (PLEG): container finished" podID="e1924f8a-318d-4d3b-ada5-703cf399beed" containerID="c344d9d9f22b51f2f10814803290d8766c28975c8ea5704fd4717019915221a6" exitCode=0 Jan 25 00:21:34 crc kubenswrapper[4947]: I0125 00:21:34.712867 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" event={"ID":"e1924f8a-318d-4d3b-ada5-703cf399beed","Type":"ContainerDied","Data":"c344d9d9f22b51f2f10814803290d8766c28975c8ea5704fd4717019915221a6"} Jan 25 00:21:34 crc kubenswrapper[4947]: I0125 00:21:34.713018 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" event={"ID":"e1924f8a-318d-4d3b-ada5-703cf399beed","Type":"ContainerStarted","Data":"7525f14ae3f0ec04e7b4ff6ddaa5361d71583b1bb8588c944a3b51f09c233b98"} Jan 25 00:21:34 crc kubenswrapper[4947]: I0125 00:21:34.718505 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 25 00:21:36 crc kubenswrapper[4947]: I0125 00:21:36.724965 4947 generic.go:334] "Generic (PLEG): container finished" podID="e1924f8a-318d-4d3b-ada5-703cf399beed" containerID="5d066cdf169265ce54530ce4564da83bbc9d5bccd1ab4f3c3243fde7719295b2" exitCode=0 Jan 25 00:21:36 crc kubenswrapper[4947]: I0125 00:21:36.725183 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" 
event={"ID":"e1924f8a-318d-4d3b-ada5-703cf399beed","Type":"ContainerDied","Data":"5d066cdf169265ce54530ce4564da83bbc9d5bccd1ab4f3c3243fde7719295b2"} Jan 25 00:21:37 crc kubenswrapper[4947]: I0125 00:21:37.735273 4947 generic.go:334] "Generic (PLEG): container finished" podID="e1924f8a-318d-4d3b-ada5-703cf399beed" containerID="f64ba124f04191e2608b533488cc3957072c8532f826bf85a3496fa6b92d6e34" exitCode=0 Jan 25 00:21:37 crc kubenswrapper[4947]: I0125 00:21:37.735323 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" event={"ID":"e1924f8a-318d-4d3b-ada5-703cf399beed","Type":"ContainerDied","Data":"f64ba124f04191e2608b533488cc3957072c8532f826bf85a3496fa6b92d6e34"} Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.036842 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.175752 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-bundle\") pod \"e1924f8a-318d-4d3b-ada5-703cf399beed\" (UID: \"e1924f8a-318d-4d3b-ada5-703cf399beed\") " Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.175870 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-util\") pod \"e1924f8a-318d-4d3b-ada5-703cf399beed\" (UID: \"e1924f8a-318d-4d3b-ada5-703cf399beed\") " Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.176031 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnm6g\" (UniqueName: \"kubernetes.io/projected/e1924f8a-318d-4d3b-ada5-703cf399beed-kube-api-access-jnm6g\") pod \"e1924f8a-318d-4d3b-ada5-703cf399beed\" (UID: 
\"e1924f8a-318d-4d3b-ada5-703cf399beed\") " Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.181082 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-bundle" (OuterVolumeSpecName: "bundle") pod "e1924f8a-318d-4d3b-ada5-703cf399beed" (UID: "e1924f8a-318d-4d3b-ada5-703cf399beed"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.182172 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1924f8a-318d-4d3b-ada5-703cf399beed-kube-api-access-jnm6g" (OuterVolumeSpecName: "kube-api-access-jnm6g") pod "e1924f8a-318d-4d3b-ada5-703cf399beed" (UID: "e1924f8a-318d-4d3b-ada5-703cf399beed"). InnerVolumeSpecName "kube-api-access-jnm6g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.210502 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-util" (OuterVolumeSpecName: "util") pod "e1924f8a-318d-4d3b-ada5-703cf399beed" (UID: "e1924f8a-318d-4d3b-ada5-703cf399beed"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.277913 4947 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.277969 4947 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e1924f8a-318d-4d3b-ada5-703cf399beed-util\") on node \"crc\" DevicePath \"\"" Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.277990 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnm6g\" (UniqueName: \"kubernetes.io/projected/e1924f8a-318d-4d3b-ada5-703cf399beed-kube-api-access-jnm6g\") on node \"crc\" DevicePath \"\"" Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.750979 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" event={"ID":"e1924f8a-318d-4d3b-ada5-703cf399beed","Type":"ContainerDied","Data":"7525f14ae3f0ec04e7b4ff6ddaa5361d71583b1bb8588c944a3b51f09c233b98"} Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.751388 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7525f14ae3f0ec04e7b4ff6ddaa5361d71583b1bb8588c944a3b51f09c233b98" Jan 25 00:21:39 crc kubenswrapper[4947]: I0125 00:21:39.751095 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.659722 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm"] Jan 25 00:21:42 crc kubenswrapper[4947]: E0125 00:21:42.660411 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1924f8a-318d-4d3b-ada5-703cf399beed" containerName="util" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.660435 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1924f8a-318d-4d3b-ada5-703cf399beed" containerName="util" Jan 25 00:21:42 crc kubenswrapper[4947]: E0125 00:21:42.660460 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1924f8a-318d-4d3b-ada5-703cf399beed" containerName="extract" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.660475 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1924f8a-318d-4d3b-ada5-703cf399beed" containerName="extract" Jan 25 00:21:42 crc kubenswrapper[4947]: E0125 00:21:42.660499 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1924f8a-318d-4d3b-ada5-703cf399beed" containerName="pull" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.660513 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1924f8a-318d-4d3b-ada5-703cf399beed" containerName="pull" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.660694 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1924f8a-318d-4d3b-ada5-703cf399beed" containerName="extract" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.661944 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.668569 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.678263 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm"] Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.722548 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbsfl\" (UniqueName: \"kubernetes.io/projected/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-kube-api-access-bbsfl\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.722681 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.722767 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:42 crc kubenswrapper[4947]: 
I0125 00:21:42.823585 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.823635 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.823701 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbsfl\" (UniqueName: \"kubernetes.io/projected/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-kube-api-access-bbsfl\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.824295 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.824597 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.851382 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbsfl\" (UniqueName: \"kubernetes.io/projected/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-kube-api-access-bbsfl\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:42 crc kubenswrapper[4947]: I0125 00:21:42.987440 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.222381 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm"] Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.440639 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk"] Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.442888 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.451731 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk"] Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.536948 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkzg5\" (UniqueName: \"kubernetes.io/projected/b18bd971-05aa-4366-8829-6d2db0f3a1a0-kube-api-access-vkzg5\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.537010 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.537034 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.638255 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkzg5\" (UniqueName: 
\"kubernetes.io/projected/b18bd971-05aa-4366-8829-6d2db0f3a1a0-kube-api-access-vkzg5\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk"
Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.638322 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk"
Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.638349 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk"
Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.638913 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk"
Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.638968 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk"
Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.675095 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkzg5\" (UniqueName: \"kubernetes.io/projected/b18bd971-05aa-4366-8829-6d2db0f3a1a0-kube-api-access-vkzg5\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk"
Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.779491 4947 generic.go:334] "Generic (PLEG): container finished" podID="1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" containerID="cb471fdf3dc7b256e2f9a47919cc149690af1eaa2a6e571e917f85b39c462994" exitCode=0
Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.779569 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" event={"ID":"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9","Type":"ContainerDied","Data":"cb471fdf3dc7b256e2f9a47919cc149690af1eaa2a6e571e917f85b39c462994"}
Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.779620 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" event={"ID":"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9","Type":"ContainerStarted","Data":"0bbe8e18431b933dc4c8dd641ad64cbd85e19def267075f50533a7106fa4d7ce"}
Jan 25 00:21:43 crc kubenswrapper[4947]: I0125 00:21:43.883025 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk"
Jan 25 00:21:44 crc kubenswrapper[4947]: I0125 00:21:44.140058 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk"]
Jan 25 00:21:44 crc kubenswrapper[4947]: W0125 00:21:44.154276 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb18bd971_05aa_4366_8829_6d2db0f3a1a0.slice/crio-630dccbe3637d72159e610ce692a503238fe899827098bd335c6ff36197beb37 WatchSource:0}: Error finding container 630dccbe3637d72159e610ce692a503238fe899827098bd335c6ff36197beb37: Status 404 returned error can't find the container with id 630dccbe3637d72159e610ce692a503238fe899827098bd335c6ff36197beb37
Jan 25 00:21:44 crc kubenswrapper[4947]: I0125 00:21:44.790763 4947 generic.go:334] "Generic (PLEG): container finished" podID="b18bd971-05aa-4366-8829-6d2db0f3a1a0" containerID="1ad1d20bcbaa8c0f65d732dfbced8a2de80d1ca15d3a0b8903918df3d0e62520" exitCode=0
Jan 25 00:21:44 crc kubenswrapper[4947]: I0125 00:21:44.791035 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" event={"ID":"b18bd971-05aa-4366-8829-6d2db0f3a1a0","Type":"ContainerDied","Data":"1ad1d20bcbaa8c0f65d732dfbced8a2de80d1ca15d3a0b8903918df3d0e62520"}
Jan 25 00:21:44 crc kubenswrapper[4947]: I0125 00:21:44.791479 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" event={"ID":"b18bd971-05aa-4366-8829-6d2db0f3a1a0","Type":"ContainerStarted","Data":"630dccbe3637d72159e610ce692a503238fe899827098bd335c6ff36197beb37"}
Jan 25 00:21:45 crc kubenswrapper[4947]: I0125 00:21:45.796859 4947 generic.go:334] "Generic (PLEG): container finished" podID="1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" containerID="44fb5b127bacc5cb90d84be224e5039f0b675250aedeebdd106501dd15a0adcf" exitCode=0
Jan 25 00:21:45 crc kubenswrapper[4947]: I0125 00:21:45.796898 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" event={"ID":"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9","Type":"ContainerDied","Data":"44fb5b127bacc5cb90d84be224e5039f0b675250aedeebdd106501dd15a0adcf"}
Jan 25 00:21:46 crc kubenswrapper[4947]: I0125 00:21:46.804542 4947 generic.go:334] "Generic (PLEG): container finished" podID="1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" containerID="af68368e1bac3ec2c5478bf448f97dc6bb0c2978ab481ae9d20f9945c6208f5b" exitCode=0
Jan 25 00:21:46 crc kubenswrapper[4947]: I0125 00:21:46.804652 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" event={"ID":"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9","Type":"ContainerDied","Data":"af68368e1bac3ec2c5478bf448f97dc6bb0c2978ab481ae9d20f9945c6208f5b"}
Jan 25 00:21:46 crc kubenswrapper[4947]: I0125 00:21:46.806791 4947 generic.go:334] "Generic (PLEG): container finished" podID="b18bd971-05aa-4366-8829-6d2db0f3a1a0" containerID="75619ffeb25c3155dfc3f42378a864e202e00a6989de5b1412dc5e50c007ae6a" exitCode=0
Jan 25 00:21:46 crc kubenswrapper[4947]: I0125 00:21:46.806854 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" event={"ID":"b18bd971-05aa-4366-8829-6d2db0f3a1a0","Type":"ContainerDied","Data":"75619ffeb25c3155dfc3f42378a864e202e00a6989de5b1412dc5e50c007ae6a"}
Jan 25 00:21:47 crc kubenswrapper[4947]: I0125 00:21:47.072740 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 25 00:21:47 crc kubenswrapper[4947]: I0125 00:21:47.073099 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 25 00:21:47 crc kubenswrapper[4947]: I0125 00:21:47.073171 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh"
Jan 25 00:21:47 crc kubenswrapper[4947]: I0125 00:21:47.073811 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f64176709d5607e82072c383022bad7109ba8a5dcfa7f1ccb95f13edf0e8c935"} pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 25 00:21:47 crc kubenswrapper[4947]: I0125 00:21:47.073875 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" containerID="cri-o://f64176709d5607e82072c383022bad7109ba8a5dcfa7f1ccb95f13edf0e8c935" gracePeriod=600
Jan 25 00:21:47 crc kubenswrapper[4947]: I0125 00:21:47.814930 4947 generic.go:334] "Generic (PLEG): container finished" podID="b18bd971-05aa-4366-8829-6d2db0f3a1a0" containerID="9c267f02649b54d162dfe171b89b39fe7c64733394091d4dd29fac4d4acf5c09" exitCode=0
Jan 25 00:21:47 crc kubenswrapper[4947]: I0125 00:21:47.815017 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" event={"ID":"b18bd971-05aa-4366-8829-6d2db0f3a1a0","Type":"ContainerDied","Data":"9c267f02649b54d162dfe171b89b39fe7c64733394091d4dd29fac4d4acf5c09"}
Jan 25 00:21:47 crc kubenswrapper[4947]: I0125 00:21:47.818339 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerID="f64176709d5607e82072c383022bad7109ba8a5dcfa7f1ccb95f13edf0e8c935" exitCode=0
Jan 25 00:21:47 crc kubenswrapper[4947]: I0125 00:21:47.818427 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerDied","Data":"f64176709d5607e82072c383022bad7109ba8a5dcfa7f1ccb95f13edf0e8c935"}
Jan 25 00:21:47 crc kubenswrapper[4947]: I0125 00:21:47.818519 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"3d54be0f7cb7c92bd5d7a293f8c1b17de01e73a4d2768de922b6d4f49ead1a39"}
Jan 25 00:21:47 crc kubenswrapper[4947]: I0125 00:21:47.818552 4947 scope.go:117] "RemoveContainer" containerID="3aad7ba55a48c39182daae557e3aa5f13d9797554883516debfe73624ee84ab3"
Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.128671 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm"
Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.305727 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-util\") pod \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") "
Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.305830 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbsfl\" (UniqueName: \"kubernetes.io/projected/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-kube-api-access-bbsfl\") pod \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") "
Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.305875 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-bundle\") pod \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\" (UID: \"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9\") "
Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.306836 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-bundle" (OuterVolumeSpecName: "bundle") pod "1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" (UID: "1cfc506e-cb97-4eb4-a967-d5ea940b5ce9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.315685 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-kube-api-access-bbsfl" (OuterVolumeSpecName: "kube-api-access-bbsfl") pod "1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" (UID: "1cfc506e-cb97-4eb4-a967-d5ea940b5ce9"). InnerVolumeSpecName "kube-api-access-bbsfl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.331248 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-util" (OuterVolumeSpecName: "util") pod "1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" (UID: "1cfc506e-cb97-4eb4-a967-d5ea940b5ce9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.407408 4947 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-util\") on node \"crc\" DevicePath \"\""
Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.407451 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbsfl\" (UniqueName: \"kubernetes.io/projected/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-kube-api-access-bbsfl\") on node \"crc\" DevicePath \"\""
Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.407462 4947 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1cfc506e-cb97-4eb4-a967-d5ea940b5ce9-bundle\") on node \"crc\" DevicePath \"\""
Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.827923 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm" event={"ID":"1cfc506e-cb97-4eb4-a967-d5ea940b5ce9","Type":"ContainerDied","Data":"0bbe8e18431b933dc4c8dd641ad64cbd85e19def267075f50533a7106fa4d7ce"}
Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.827973 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bbe8e18431b933dc4c8dd641ad64cbd85e19def267075f50533a7106fa4d7ce"
Jan 25 00:21:48 crc kubenswrapper[4947]: I0125 00:21:48.827933 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm"
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.141807 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf"]
Jan 25 00:21:49 crc kubenswrapper[4947]: E0125 00:21:49.142299 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" containerName="extract"
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.142312 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" containerName="extract"
Jan 25 00:21:49 crc kubenswrapper[4947]: E0125 00:21:49.142330 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" containerName="util"
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.142335 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" containerName="util"
Jan 25 00:21:49 crc kubenswrapper[4947]: E0125 00:21:49.142343 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" containerName="pull"
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.142349 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" containerName="pull"
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.142438 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cfc506e-cb97-4eb4-a967-d5ea940b5ce9" containerName="extract"
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.145406 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf"
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.159053 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf"]
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.249421 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk"
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.318676 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf"
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.318742 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tn47\" (UniqueName: \"kubernetes.io/projected/373809d6-f72c-4eff-afeb-1fa942bb9e22-kube-api-access-4tn47\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf"
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.318769 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf"
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.421452 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkzg5\" (UniqueName: \"kubernetes.io/projected/b18bd971-05aa-4366-8829-6d2db0f3a1a0-kube-api-access-vkzg5\") pod \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") "
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.421523 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-util\") pod \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") "
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.421574 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-bundle\") pod \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\" (UID: \"b18bd971-05aa-4366-8829-6d2db0f3a1a0\") "
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.421757 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tn47\" (UniqueName: \"kubernetes.io/projected/373809d6-f72c-4eff-afeb-1fa942bb9e22-kube-api-access-4tn47\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf"
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.421793 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf"
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.421845 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf"
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.422380 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf"
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.423542 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-bundle" (OuterVolumeSpecName: "bundle") pod "b18bd971-05aa-4366-8829-6d2db0f3a1a0" (UID: "b18bd971-05aa-4366-8829-6d2db0f3a1a0"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.424117 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf"
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.432290 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b18bd971-05aa-4366-8829-6d2db0f3a1a0-kube-api-access-vkzg5" (OuterVolumeSpecName: "kube-api-access-vkzg5") pod "b18bd971-05aa-4366-8829-6d2db0f3a1a0" (UID: "b18bd971-05aa-4366-8829-6d2db0f3a1a0"). InnerVolumeSpecName "kube-api-access-vkzg5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.454845 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-util" (OuterVolumeSpecName: "util") pod "b18bd971-05aa-4366-8829-6d2db0f3a1a0" (UID: "b18bd971-05aa-4366-8829-6d2db0f3a1a0"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.486818 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tn47\" (UniqueName: \"kubernetes.io/projected/373809d6-f72c-4eff-afeb-1fa942bb9e22-kube-api-access-4tn47\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf"
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.516136 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf"
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.522554 4947 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-bundle\") on node \"crc\" DevicePath \"\""
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.522583 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkzg5\" (UniqueName: \"kubernetes.io/projected/b18bd971-05aa-4366-8829-6d2db0f3a1a0-kube-api-access-vkzg5\") on node \"crc\" DevicePath \"\""
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.522593 4947 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b18bd971-05aa-4366-8829-6d2db0f3a1a0-util\") on node \"crc\" DevicePath \"\""
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.830170 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf"]
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.835329 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk" event={"ID":"b18bd971-05aa-4366-8829-6d2db0f3a1a0","Type":"ContainerDied","Data":"630dccbe3637d72159e610ce692a503238fe899827098bd335c6ff36197beb37"}
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.835362 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="630dccbe3637d72159e610ce692a503238fe899827098bd335c6ff36197beb37"
Jan 25 00:21:49 crc kubenswrapper[4947]: I0125 00:21:49.835429 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk"
Jan 25 00:21:49 crc kubenswrapper[4947]: W0125 00:21:49.845183 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod373809d6_f72c_4eff_afeb_1fa942bb9e22.slice/crio-ade22be7624bdefe70bef3c545d6a1213dc871588b1a8aea09e49b8ce71787cb WatchSource:0}: Error finding container ade22be7624bdefe70bef3c545d6a1213dc871588b1a8aea09e49b8ce71787cb: Status 404 returned error can't find the container with id ade22be7624bdefe70bef3c545d6a1213dc871588b1a8aea09e49b8ce71787cb
Jan 25 00:21:50 crc kubenswrapper[4947]: I0125 00:21:50.857538 4947 generic.go:334] "Generic (PLEG): container finished" podID="373809d6-f72c-4eff-afeb-1fa942bb9e22" containerID="25f39926cd905f42a5f17c64be28974859261061e4e417d3c9b8a40d0d2ab729" exitCode=0
Jan 25 00:21:50 crc kubenswrapper[4947]: I0125 00:21:50.857921 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" event={"ID":"373809d6-f72c-4eff-afeb-1fa942bb9e22","Type":"ContainerDied","Data":"25f39926cd905f42a5f17c64be28974859261061e4e417d3c9b8a40d0d2ab729"}
Jan 25 00:21:50 crc kubenswrapper[4947]: I0125 00:21:50.858010 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" event={"ID":"373809d6-f72c-4eff-afeb-1fa942bb9e22","Type":"ContainerStarted","Data":"ade22be7624bdefe70bef3c545d6a1213dc871588b1a8aea09e49b8ce71787cb"}
Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.799012 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rhx28"]
Jan 25 00:21:51 crc kubenswrapper[4947]: E0125 00:21:51.799632 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b18bd971-05aa-4366-8829-6d2db0f3a1a0" containerName="pull"
Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.799647 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b18bd971-05aa-4366-8829-6d2db0f3a1a0" containerName="pull"
Jan 25 00:21:51 crc kubenswrapper[4947]: E0125 00:21:51.799657 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b18bd971-05aa-4366-8829-6d2db0f3a1a0" containerName="util"
Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.799663 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b18bd971-05aa-4366-8829-6d2db0f3a1a0" containerName="util"
Jan 25 00:21:51 crc kubenswrapper[4947]: E0125 00:21:51.799671 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b18bd971-05aa-4366-8829-6d2db0f3a1a0" containerName="extract"
Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.799680 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b18bd971-05aa-4366-8829-6d2db0f3a1a0" containerName="extract"
Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.799786 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="b18bd971-05aa-4366-8829-6d2db0f3a1a0" containerName="extract"
Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.800484 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhx28"
Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.845581 4947 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.878744 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rhx28"]
Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.952764 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkh5r\" (UniqueName: \"kubernetes.io/projected/da41a595-7e83-406d-b782-de0adf6e3d8d-kube-api-access-gkh5r\") pod \"certified-operators-rhx28\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " pod="openshift-marketplace/certified-operators-rhx28"
Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.952852 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-utilities\") pod \"certified-operators-rhx28\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " pod="openshift-marketplace/certified-operators-rhx28"
Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.952878 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-catalog-content\") pod \"certified-operators-rhx28\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " pod="openshift-marketplace/certified-operators-rhx28"
Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.996013 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hpxfq"]
Jan 25 00:21:51 crc kubenswrapper[4947]: I0125 00:21:51.996946 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hpxfq"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.010668 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hpxfq"]
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.056715 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkh5r\" (UniqueName: \"kubernetes.io/projected/da41a595-7e83-406d-b782-de0adf6e3d8d-kube-api-access-gkh5r\") pod \"certified-operators-rhx28\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " pod="openshift-marketplace/certified-operators-rhx28"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.056838 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-utilities\") pod \"certified-operators-rhx28\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " pod="openshift-marketplace/certified-operators-rhx28"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.056871 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-catalog-content\") pod \"certified-operators-rhx28\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " pod="openshift-marketplace/certified-operators-rhx28"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.057404 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-catalog-content\") pod \"certified-operators-rhx28\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " pod="openshift-marketplace/certified-operators-rhx28"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.057463 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-utilities\") pod \"certified-operators-rhx28\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " pod="openshift-marketplace/certified-operators-rhx28"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.078275 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkh5r\" (UniqueName: \"kubernetes.io/projected/da41a595-7e83-406d-b782-de0adf6e3d8d-kube-api-access-gkh5r\") pod \"certified-operators-rhx28\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " pod="openshift-marketplace/certified-operators-rhx28"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.116665 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rhx28"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.158168 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bmsn\" (UniqueName: \"kubernetes.io/projected/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-kube-api-access-9bmsn\") pod \"redhat-operators-hpxfq\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " pod="openshift-marketplace/redhat-operators-hpxfq"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.158211 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-utilities\") pod \"redhat-operators-hpxfq\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " pod="openshift-marketplace/redhat-operators-hpxfq"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.158233 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-catalog-content\") pod \"redhat-operators-hpxfq\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " pod="openshift-marketplace/redhat-operators-hpxfq"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.259322 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bmsn\" (UniqueName: \"kubernetes.io/projected/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-kube-api-access-9bmsn\") pod \"redhat-operators-hpxfq\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " pod="openshift-marketplace/redhat-operators-hpxfq"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.259367 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-utilities\") pod \"redhat-operators-hpxfq\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " pod="openshift-marketplace/redhat-operators-hpxfq"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.259399 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-catalog-content\") pod \"redhat-operators-hpxfq\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " pod="openshift-marketplace/redhat-operators-hpxfq"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.260050 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-utilities\") pod \"redhat-operators-hpxfq\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " pod="openshift-marketplace/redhat-operators-hpxfq"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.260064 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-catalog-content\") pod \"redhat-operators-hpxfq\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " pod="openshift-marketplace/redhat-operators-hpxfq"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.283797 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bmsn\" (UniqueName: \"kubernetes.io/projected/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-kube-api-access-9bmsn\") pod \"redhat-operators-hpxfq\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " pod="openshift-marketplace/redhat-operators-hpxfq"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.315307 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hpxfq"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.423715 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-wjw4s"]
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.424488 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjw4s"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.429776 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.429855 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-gwlqr"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.430054 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.447850 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-wjw4s"]
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.563289 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfrpk\" (UniqueName: \"kubernetes.io/projected/3e662e75-c8ba-4da8-856f-9fc73a2316aa-kube-api-access-cfrpk\") pod \"obo-prometheus-operator-68bc856cb9-wjw4s\" (UID: \"3e662e75-c8ba-4da8-856f-9fc73a2316aa\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjw4s"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.566653 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rhx28"]
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.571415 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx"]
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.572099 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.575779 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.576068 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-dslsv"
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.588545 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k"]
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.593649 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx"]
Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.593805 4947 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k" Jan 25 00:21:52 crc kubenswrapper[4947]: W0125 00:21:52.605316 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda41a595_7e83_406d_b782_de0adf6e3d8d.slice/crio-cfa4a1c82eb366c0f3d41e29a345c0087d291286eca35ee7c803be1e30f42277 WatchSource:0}: Error finding container cfa4a1c82eb366c0f3d41e29a345c0087d291286eca35ee7c803be1e30f42277: Status 404 returned error can't find the container with id cfa4a1c82eb366c0f3d41e29a345c0087d291286eca35ee7c803be1e30f42277 Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.625448 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k"] Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.634365 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hpxfq"] Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.667333 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3860bf6-f86b-4206-a225-6fa61372a988-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx\" (UID: \"a3860bf6-f86b-4206-a225-6fa61372a988\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.667401 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a3860bf6-f86b-4206-a225-6fa61372a988-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx\" (UID: \"a3860bf6-f86b-4206-a225-6fa61372a988\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx" Jan 25 00:21:52 crc 
kubenswrapper[4947]: I0125 00:21:52.667449 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfrpk\" (UniqueName: \"kubernetes.io/projected/3e662e75-c8ba-4da8-856f-9fc73a2316aa-kube-api-access-cfrpk\") pod \"obo-prometheus-operator-68bc856cb9-wjw4s\" (UID: \"3e662e75-c8ba-4da8-856f-9fc73a2316aa\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjw4s" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.689710 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfrpk\" (UniqueName: \"kubernetes.io/projected/3e662e75-c8ba-4da8-856f-9fc73a2316aa-kube-api-access-cfrpk\") pod \"obo-prometheus-operator-68bc856cb9-wjw4s\" (UID: \"3e662e75-c8ba-4da8-856f-9fc73a2316aa\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjw4s" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.753832 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjw4s" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.758322 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4v5sm"] Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.758966 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.761823 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.762015 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-6bvcn" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.771949 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae208ca2-2ac2-4a6a-b88e-127c986f32a5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k\" (UID: \"ae208ca2-2ac2-4a6a-b88e-127c986f32a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.772036 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae208ca2-2ac2-4a6a-b88e-127c986f32a5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k\" (UID: \"ae208ca2-2ac2-4a6a-b88e-127c986f32a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.772087 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3860bf6-f86b-4206-a225-6fa61372a988-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx\" (UID: \"a3860bf6-f86b-4206-a225-6fa61372a988\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.772117 4947 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a3860bf6-f86b-4206-a225-6fa61372a988-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx\" (UID: \"a3860bf6-f86b-4206-a225-6fa61372a988\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.776515 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a3860bf6-f86b-4206-a225-6fa61372a988-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx\" (UID: \"a3860bf6-f86b-4206-a225-6fa61372a988\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.783242 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a3860bf6-f86b-4206-a225-6fa61372a988-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx\" (UID: \"a3860bf6-f86b-4206-a225-6fa61372a988\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.787414 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4v5sm"] Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.873065 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae208ca2-2ac2-4a6a-b88e-127c986f32a5-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k\" (UID: \"ae208ca2-2ac2-4a6a-b88e-127c986f32a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.873618 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d3adf01-5529-4edb-9b7f-f3c782156a8d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4v5sm\" (UID: \"9d3adf01-5529-4edb-9b7f-f3c782156a8d\") " pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.873669 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae208ca2-2ac2-4a6a-b88e-127c986f32a5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k\" (UID: \"ae208ca2-2ac2-4a6a-b88e-127c986f32a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.873694 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfvkj\" (UniqueName: \"kubernetes.io/projected/9d3adf01-5529-4edb-9b7f-f3c782156a8d-kube-api-access-hfvkj\") pod \"observability-operator-59bdc8b94-4v5sm\" (UID: \"9d3adf01-5529-4edb-9b7f-f3c782156a8d\") " pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.879819 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ae208ca2-2ac2-4a6a-b88e-127c986f32a5-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k\" (UID: \"ae208ca2-2ac2-4a6a-b88e-127c986f32a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.879931 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ae208ca2-2ac2-4a6a-b88e-127c986f32a5-webhook-cert\") pod 
\"obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k\" (UID: \"ae208ca2-2ac2-4a6a-b88e-127c986f32a5\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.883516 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpxfq" event={"ID":"c8e8b07d-e9e8-4efd-a05d-f09f78abca00","Type":"ContainerStarted","Data":"9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8"} Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.883587 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpxfq" event={"ID":"c8e8b07d-e9e8-4efd-a05d-f09f78abca00","Type":"ContainerStarted","Data":"13b192541fa7589b2466360e6399546425f5d80b3b3a89c1761b0ae60a095da2"} Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.889439 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.889632 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhx28" event={"ID":"da41a595-7e83-406d-b782-de0adf6e3d8d","Type":"ContainerStarted","Data":"318d02fed846a5ed6901b65f31cfc5249873f0176dd5ac1452713156f5ee3ae6"} Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.889760 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhx28" event={"ID":"da41a595-7e83-406d-b782-de0adf6e3d8d","Type":"ContainerStarted","Data":"cfa4a1c82eb366c0f3d41e29a345c0087d291286eca35ee7c803be1e30f42277"} Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.926504 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.975289 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d3adf01-5529-4edb-9b7f-f3c782156a8d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4v5sm\" (UID: \"9d3adf01-5529-4edb-9b7f-f3c782156a8d\") " pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.975403 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfvkj\" (UniqueName: \"kubernetes.io/projected/9d3adf01-5529-4edb-9b7f-f3c782156a8d-kube-api-access-hfvkj\") pod \"observability-operator-59bdc8b94-4v5sm\" (UID: \"9d3adf01-5529-4edb-9b7f-f3c782156a8d\") " pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" Jan 25 00:21:52 crc kubenswrapper[4947]: I0125 00:21:52.980369 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/9d3adf01-5529-4edb-9b7f-f3c782156a8d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-4v5sm\" (UID: \"9d3adf01-5529-4edb-9b7f-f3c782156a8d\") " pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.001404 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-qz44g"] Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.002953 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-qz44g" Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.009910 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfvkj\" (UniqueName: \"kubernetes.io/projected/9d3adf01-5529-4edb-9b7f-f3c782156a8d-kube-api-access-hfvkj\") pod \"observability-operator-59bdc8b94-4v5sm\" (UID: \"9d3adf01-5529-4edb-9b7f-f3c782156a8d\") " pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.012659 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-h78sp" Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.030579 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-qz44g"] Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.100406 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.177810 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/38944919-0d65-4fdd-b2bd-2780f8e77bde-openshift-service-ca\") pod \"perses-operator-5bf474d74f-qz44g\" (UID: \"38944919-0d65-4fdd-b2bd-2780f8e77bde\") " pod="openshift-operators/perses-operator-5bf474d74f-qz44g" Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.178398 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv2mn\" (UniqueName: \"kubernetes.io/projected/38944919-0d65-4fdd-b2bd-2780f8e77bde-kube-api-access-bv2mn\") pod \"perses-operator-5bf474d74f-qz44g\" (UID: \"38944919-0d65-4fdd-b2bd-2780f8e77bde\") " pod="openshift-operators/perses-operator-5bf474d74f-qz44g" Jan 25 00:21:53 crc kubenswrapper[4947]: 
I0125 00:21:53.281708 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv2mn\" (UniqueName: \"kubernetes.io/projected/38944919-0d65-4fdd-b2bd-2780f8e77bde-kube-api-access-bv2mn\") pod \"perses-operator-5bf474d74f-qz44g\" (UID: \"38944919-0d65-4fdd-b2bd-2780f8e77bde\") " pod="openshift-operators/perses-operator-5bf474d74f-qz44g" Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.281793 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/38944919-0d65-4fdd-b2bd-2780f8e77bde-openshift-service-ca\") pod \"perses-operator-5bf474d74f-qz44g\" (UID: \"38944919-0d65-4fdd-b2bd-2780f8e77bde\") " pod="openshift-operators/perses-operator-5bf474d74f-qz44g" Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.282779 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/38944919-0d65-4fdd-b2bd-2780f8e77bde-openshift-service-ca\") pod \"perses-operator-5bf474d74f-qz44g\" (UID: \"38944919-0d65-4fdd-b2bd-2780f8e77bde\") " pod="openshift-operators/perses-operator-5bf474d74f-qz44g" Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.326837 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv2mn\" (UniqueName: \"kubernetes.io/projected/38944919-0d65-4fdd-b2bd-2780f8e77bde-kube-api-access-bv2mn\") pod \"perses-operator-5bf474d74f-qz44g\" (UID: \"38944919-0d65-4fdd-b2bd-2780f8e77bde\") " pod="openshift-operators/perses-operator-5bf474d74f-qz44g" Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.335063 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-wjw4s"] Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.341453 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-qz44g" Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.584624 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx"] Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.620898 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k"] Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.627160 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-4v5sm"] Jan 25 00:21:53 crc kubenswrapper[4947]: W0125 00:21:53.644679 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae208ca2_2ac2_4a6a_b88e_127c986f32a5.slice/crio-99fd7611954c55918f6cf83a4fa39bcb6a224762ca69c3f597655aea7af46ef1 WatchSource:0}: Error finding container 99fd7611954c55918f6cf83a4fa39bcb6a224762ca69c3f597655aea7af46ef1: Status 404 returned error can't find the container with id 99fd7611954c55918f6cf83a4fa39bcb6a224762ca69c3f597655aea7af46ef1 Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.902769 4947 generic.go:334] "Generic (PLEG): container finished" podID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerID="318d02fed846a5ed6901b65f31cfc5249873f0176dd5ac1452713156f5ee3ae6" exitCode=0 Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.902880 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhx28" event={"ID":"da41a595-7e83-406d-b782-de0adf6e3d8d","Type":"ContainerDied","Data":"318d02fed846a5ed6901b65f31cfc5249873f0176dd5ac1452713156f5ee3ae6"} Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.910839 4947 generic.go:334] "Generic (PLEG): container finished" podID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" 
containerID="9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8" exitCode=0 Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.910928 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpxfq" event={"ID":"c8e8b07d-e9e8-4efd-a05d-f09f78abca00","Type":"ContainerDied","Data":"9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8"} Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.912488 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjw4s" event={"ID":"3e662e75-c8ba-4da8-856f-9fc73a2316aa","Type":"ContainerStarted","Data":"925ec9a913eb4eacc0aa9ece9d9bc08723ee8e783934679ef2edbd5c06097b00"} Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.914653 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" event={"ID":"9d3adf01-5529-4edb-9b7f-f3c782156a8d","Type":"ContainerStarted","Data":"270ad2e21b988d83ce4665713f47d6182fd02f996f87f0310d8cff5873ce78f3"} Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.915673 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k" event={"ID":"ae208ca2-2ac2-4a6a-b88e-127c986f32a5","Type":"ContainerStarted","Data":"99fd7611954c55918f6cf83a4fa39bcb6a224762ca69c3f597655aea7af46ef1"} Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.916904 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx" event={"ID":"a3860bf6-f86b-4206-a225-6fa61372a988","Type":"ContainerStarted","Data":"3e2642cfd0bcb420bbae946fcdc8882c134f65e86f73da36a1f1659b02673f57"} Jan 25 00:21:53 crc kubenswrapper[4947]: I0125 00:21:53.977407 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-qz44g"] Jan 25 00:21:53 crc 
kubenswrapper[4947]: W0125 00:21:53.988785 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38944919_0d65_4fdd_b2bd_2780f8e77bde.slice/crio-288207986ef4335c4bcc98b5c6c8a5fe2193c346c14f3d2efb886954ecf93d76 WatchSource:0}: Error finding container 288207986ef4335c4bcc98b5c6c8a5fe2193c346c14f3d2efb886954ecf93d76: Status 404 returned error can't find the container with id 288207986ef4335c4bcc98b5c6c8a5fe2193c346c14f3d2efb886954ecf93d76 Jan 25 00:21:54 crc kubenswrapper[4947]: I0125 00:21:54.925835 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-qz44g" event={"ID":"38944919-0d65-4fdd-b2bd-2780f8e77bde","Type":"ContainerStarted","Data":"288207986ef4335c4bcc98b5c6c8a5fe2193c346c14f3d2efb886954ecf93d76"} Jan 25 00:22:00 crc kubenswrapper[4947]: I0125 00:22:00.789201 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-54ddbf459f-pm6cr"] Jan 25 00:22:00 crc kubenswrapper[4947]: I0125 00:22:00.790304 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" Jan 25 00:22:00 crc kubenswrapper[4947]: I0125 00:22:00.791893 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-65qws" Jan 25 00:22:00 crc kubenswrapper[4947]: I0125 00:22:00.792447 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Jan 25 00:22:00 crc kubenswrapper[4947]: I0125 00:22:00.792992 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Jan 25 00:22:00 crc kubenswrapper[4947]: I0125 00:22:00.793253 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Jan 25 00:22:00 crc kubenswrapper[4947]: I0125 00:22:00.825063 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-54ddbf459f-pm6cr"] Jan 25 00:22:00 crc kubenswrapper[4947]: I0125 00:22:00.921778 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v28tm\" (UniqueName: \"kubernetes.io/projected/2f1a951a-1385-42b0-acf1-a549b0edb031-kube-api-access-v28tm\") pod \"elastic-operator-54ddbf459f-pm6cr\" (UID: \"2f1a951a-1385-42b0-acf1-a549b0edb031\") " pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" Jan 25 00:22:00 crc kubenswrapper[4947]: I0125 00:22:00.922354 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f1a951a-1385-42b0-acf1-a549b0edb031-webhook-cert\") pod \"elastic-operator-54ddbf459f-pm6cr\" (UID: \"2f1a951a-1385-42b0-acf1-a549b0edb031\") " pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" Jan 25 00:22:00 crc kubenswrapper[4947]: I0125 00:22:00.922394 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f1a951a-1385-42b0-acf1-a549b0edb031-apiservice-cert\") pod \"elastic-operator-54ddbf459f-pm6cr\" (UID: \"2f1a951a-1385-42b0-acf1-a549b0edb031\") " pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" Jan 25 00:22:01 crc kubenswrapper[4947]: I0125 00:22:01.023058 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f1a951a-1385-42b0-acf1-a549b0edb031-apiservice-cert\") pod \"elastic-operator-54ddbf459f-pm6cr\" (UID: \"2f1a951a-1385-42b0-acf1-a549b0edb031\") " pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" Jan 25 00:22:01 crc kubenswrapper[4947]: I0125 00:22:01.023185 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v28tm\" (UniqueName: \"kubernetes.io/projected/2f1a951a-1385-42b0-acf1-a549b0edb031-kube-api-access-v28tm\") pod \"elastic-operator-54ddbf459f-pm6cr\" (UID: \"2f1a951a-1385-42b0-acf1-a549b0edb031\") " pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" Jan 25 00:22:01 crc kubenswrapper[4947]: I0125 00:22:01.023256 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f1a951a-1385-42b0-acf1-a549b0edb031-webhook-cert\") pod \"elastic-operator-54ddbf459f-pm6cr\" (UID: \"2f1a951a-1385-42b0-acf1-a549b0edb031\") " pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" Jan 25 00:22:01 crc kubenswrapper[4947]: I0125 00:22:01.035998 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f1a951a-1385-42b0-acf1-a549b0edb031-webhook-cert\") pod \"elastic-operator-54ddbf459f-pm6cr\" (UID: \"2f1a951a-1385-42b0-acf1-a549b0edb031\") " pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" Jan 25 00:22:01 crc kubenswrapper[4947]: I0125 00:22:01.040945 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f1a951a-1385-42b0-acf1-a549b0edb031-apiservice-cert\") pod \"elastic-operator-54ddbf459f-pm6cr\" (UID: \"2f1a951a-1385-42b0-acf1-a549b0edb031\") " pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" Jan 25 00:22:01 crc kubenswrapper[4947]: I0125 00:22:01.049112 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v28tm\" (UniqueName: \"kubernetes.io/projected/2f1a951a-1385-42b0-acf1-a549b0edb031-kube-api-access-v28tm\") pod \"elastic-operator-54ddbf459f-pm6cr\" (UID: \"2f1a951a-1385-42b0-acf1-a549b0edb031\") " pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" Jan 25 00:22:01 crc kubenswrapper[4947]: I0125 00:22:01.122331 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" Jan 25 00:22:05 crc kubenswrapper[4947]: I0125 00:22:05.510267 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-2zk49"] Jan 25 00:22:05 crc kubenswrapper[4947]: I0125 00:22:05.511657 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-2zk49"] Jan 25 00:22:05 crc kubenswrapper[4947]: I0125 00:22:05.511784 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-2zk49" Jan 25 00:22:05 crc kubenswrapper[4947]: I0125 00:22:05.517729 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-nwncq" Jan 25 00:22:05 crc kubenswrapper[4947]: I0125 00:22:05.591723 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx6qn\" (UniqueName: \"kubernetes.io/projected/fb65215d-4c8c-4191-a224-f49ec8acfaa0-kube-api-access-sx6qn\") pod \"interconnect-operator-5bb49f789d-2zk49\" (UID: \"fb65215d-4c8c-4191-a224-f49ec8acfaa0\") " pod="service-telemetry/interconnect-operator-5bb49f789d-2zk49" Jan 25 00:22:05 crc kubenswrapper[4947]: I0125 00:22:05.693170 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sx6qn\" (UniqueName: \"kubernetes.io/projected/fb65215d-4c8c-4191-a224-f49ec8acfaa0-kube-api-access-sx6qn\") pod \"interconnect-operator-5bb49f789d-2zk49\" (UID: \"fb65215d-4c8c-4191-a224-f49ec8acfaa0\") " pod="service-telemetry/interconnect-operator-5bb49f789d-2zk49" Jan 25 00:22:05 crc kubenswrapper[4947]: I0125 00:22:05.713993 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sx6qn\" (UniqueName: \"kubernetes.io/projected/fb65215d-4c8c-4191-a224-f49ec8acfaa0-kube-api-access-sx6qn\") pod \"interconnect-operator-5bb49f789d-2zk49\" (UID: \"fb65215d-4c8c-4191-a224-f49ec8acfaa0\") " pod="service-telemetry/interconnect-operator-5bb49f789d-2zk49" Jan 25 00:22:05 crc kubenswrapper[4947]: I0125 00:22:05.830559 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-2zk49" Jan 25 00:22:08 crc kubenswrapper[4947]: W0125 00:22:08.769499 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2f1a951a_1385_42b0_acf1_a549b0edb031.slice/crio-8c94f54fe23f2f506042b577aa7899f4949a9458787409a1007b64d0df33bcf4 WatchSource:0}: Error finding container 8c94f54fe23f2f506042b577aa7899f4949a9458787409a1007b64d0df33bcf4: Status 404 returned error can't find the container with id 8c94f54fe23f2f506042b577aa7899f4949a9458787409a1007b64d0df33bcf4 Jan 25 00:22:08 crc kubenswrapper[4947]: I0125 00:22:08.772653 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-54ddbf459f-pm6cr"] Jan 25 00:22:08 crc kubenswrapper[4947]: I0125 00:22:08.860075 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-2zk49"] Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.095468 4947 generic.go:334] "Generic (PLEG): container finished" podID="373809d6-f72c-4eff-afeb-1fa942bb9e22" containerID="fe4a1fe6e51e5ad30cdc321eb4c773de634c37178f521555c52df0019e2fd1ff" exitCode=0 Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.095522 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" event={"ID":"373809d6-f72c-4eff-afeb-1fa942bb9e22","Type":"ContainerDied","Data":"fe4a1fe6e51e5ad30cdc321eb4c773de634c37178f521555c52df0019e2fd1ff"} Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.098885 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpxfq" event={"ID":"c8e8b07d-e9e8-4efd-a05d-f09f78abca00","Type":"ContainerStarted","Data":"0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0"} Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.100423 4947 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjw4s" event={"ID":"3e662e75-c8ba-4da8-856f-9fc73a2316aa","Type":"ContainerStarted","Data":"551ef2892572eb4871537778a088d8934ad6b5f1ae4156f9caaae98ce5af56f6"} Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.101408 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-2zk49" event={"ID":"fb65215d-4c8c-4191-a224-f49ec8acfaa0","Type":"ContainerStarted","Data":"5b754e698f956dc3697fd48452f47a9632788ced80623b83bd46f8d26cbb652e"} Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.102409 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" event={"ID":"9d3adf01-5529-4edb-9b7f-f3c782156a8d","Type":"ContainerStarted","Data":"beaaecb1c8b32812f301fa8e5e9b50d12af967114f2fc0893f9320e72f2d246a"} Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.102999 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.105394 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.106152 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx" event={"ID":"a3860bf6-f86b-4206-a225-6fa61372a988","Type":"ContainerStarted","Data":"02ed1a0c720a38e02733355038992aba24c25bc7af33d943f97286d0c4533a02"} Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.107938 4947 generic.go:334] "Generic (PLEG): container finished" podID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerID="6cbb445a3f80a4c27484f46896be1c01f1f1240b378c14e2be68c255ab07f7b9" exitCode=0 Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 
00:22:09.108008 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhx28" event={"ID":"da41a595-7e83-406d-b782-de0adf6e3d8d","Type":"ContainerDied","Data":"6cbb445a3f80a4c27484f46896be1c01f1f1240b378c14e2be68c255ab07f7b9"} Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.111314 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-qz44g" event={"ID":"38944919-0d65-4fdd-b2bd-2780f8e77bde","Type":"ContainerStarted","Data":"89dc773416d0f3b62a79df4b3cf5746323125f6fafdefd10816de98b604d5c61"} Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.111511 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-qz44g" Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.119871 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" event={"ID":"2f1a951a-1385-42b0-acf1-a549b0edb031","Type":"ContainerStarted","Data":"8c94f54fe23f2f506042b577aa7899f4949a9458787409a1007b64d0df33bcf4"} Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.122639 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k" event={"ID":"ae208ca2-2ac2-4a6a-b88e-127c986f32a5","Type":"ContainerStarted","Data":"2096cc33af18c7db5f1d812dd28fbdd42b45054ba2aadbb06368654272887f60"} Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.176496 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-wjw4s" podStartSLOduration=2.308180846 podStartE2EDuration="17.176474428s" podCreationTimestamp="2026-01-25 00:21:52 +0000 UTC" firstStartedPulling="2026-01-25 00:21:53.398034483 +0000 UTC m=+752.631024923" lastFinishedPulling="2026-01-25 00:22:08.266328055 +0000 UTC m=+767.499318505" observedRunningTime="2026-01-25 
00:22:09.174706034 +0000 UTC m=+768.407696474" watchObservedRunningTime="2026-01-25 00:22:09.176474428 +0000 UTC m=+768.409464878" Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.218221 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k" podStartSLOduration=2.5905154120000002 podStartE2EDuration="17.218201479s" podCreationTimestamp="2026-01-25 00:21:52 +0000 UTC" firstStartedPulling="2026-01-25 00:21:53.656059935 +0000 UTC m=+752.889050375" lastFinishedPulling="2026-01-25 00:22:08.283745982 +0000 UTC m=+767.516736442" observedRunningTime="2026-01-25 00:22:09.213452635 +0000 UTC m=+768.446443075" watchObservedRunningTime="2026-01-25 00:22:09.218201479 +0000 UTC m=+768.451191919" Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.288879 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-qz44g" podStartSLOduration=3.040099591 podStartE2EDuration="17.288862075s" podCreationTimestamp="2026-01-25 00:21:52 +0000 UTC" firstStartedPulling="2026-01-25 00:21:53.991924485 +0000 UTC m=+753.224914925" lastFinishedPulling="2026-01-25 00:22:08.240686959 +0000 UTC m=+767.473677409" observedRunningTime="2026-01-25 00:22:09.28659011 +0000 UTC m=+768.519580550" watchObservedRunningTime="2026-01-25 00:22:09.288862075 +0000 UTC m=+768.521852515" Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.307525 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-4v5sm" podStartSLOduration=2.723992075 podStartE2EDuration="17.307509782s" podCreationTimestamp="2026-01-25 00:21:52 +0000 UTC" firstStartedPulling="2026-01-25 00:21:53.682235603 +0000 UTC m=+752.915226043" lastFinishedPulling="2026-01-25 00:22:08.26575326 +0000 UTC m=+767.498743750" observedRunningTime="2026-01-25 00:22:09.30530729 +0000 UTC m=+768.538297730" 
watchObservedRunningTime="2026-01-25 00:22:09.307509782 +0000 UTC m=+768.540500222" Jan 25 00:22:09 crc kubenswrapper[4947]: I0125 00:22:09.364827 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx" podStartSLOduration=2.75461781 podStartE2EDuration="17.364806987s" podCreationTimestamp="2026-01-25 00:21:52 +0000 UTC" firstStartedPulling="2026-01-25 00:21:53.655674786 +0000 UTC m=+752.888665226" lastFinishedPulling="2026-01-25 00:22:08.265863953 +0000 UTC m=+767.498854403" observedRunningTime="2026-01-25 00:22:09.361516328 +0000 UTC m=+768.594506768" watchObservedRunningTime="2026-01-25 00:22:09.364806987 +0000 UTC m=+768.597797427" Jan 25 00:22:10 crc kubenswrapper[4947]: I0125 00:22:10.133592 4947 generic.go:334] "Generic (PLEG): container finished" podID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" containerID="0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0" exitCode=0 Jan 25 00:22:10 crc kubenswrapper[4947]: I0125 00:22:10.133905 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpxfq" event={"ID":"c8e8b07d-e9e8-4efd-a05d-f09f78abca00","Type":"ContainerDied","Data":"0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0"} Jan 25 00:22:10 crc kubenswrapper[4947]: I0125 00:22:10.139619 4947 generic.go:334] "Generic (PLEG): container finished" podID="373809d6-f72c-4eff-afeb-1fa942bb9e22" containerID="5308d7d0366e6f92031ccebe149d4fb138248b027eeea8f45d458990d0afa9b4" exitCode=0 Jan 25 00:22:10 crc kubenswrapper[4947]: I0125 00:22:10.139680 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" event={"ID":"373809d6-f72c-4eff-afeb-1fa942bb9e22","Type":"ContainerDied","Data":"5308d7d0366e6f92031ccebe149d4fb138248b027eeea8f45d458990d0afa9b4"} Jan 25 00:22:10 crc kubenswrapper[4947]: I0125 00:22:10.142407 
4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhx28" event={"ID":"da41a595-7e83-406d-b782-de0adf6e3d8d","Type":"ContainerStarted","Data":"101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2"} Jan 25 00:22:11 crc kubenswrapper[4947]: I0125 00:22:11.125069 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rhx28" podStartSLOduration=4.480411677 podStartE2EDuration="20.12504854s" podCreationTimestamp="2026-01-25 00:21:51 +0000 UTC" firstStartedPulling="2026-01-25 00:21:53.904858356 +0000 UTC m=+753.137848796" lastFinishedPulling="2026-01-25 00:22:09.549495229 +0000 UTC m=+768.782485659" observedRunningTime="2026-01-25 00:22:10.205412881 +0000 UTC m=+769.438403321" watchObservedRunningTime="2026-01-25 00:22:11.12504854 +0000 UTC m=+770.358038980" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.117151 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.118249 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.157484 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.186008 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" event={"ID":"373809d6-f72c-4eff-afeb-1fa942bb9e22","Type":"ContainerDied","Data":"ade22be7624bdefe70bef3c545d6a1213dc871588b1a8aea09e49b8ce71787cb"} Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.186072 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ade22be7624bdefe70bef3c545d6a1213dc871588b1a8aea09e49b8ce71787cb" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.186033 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.187207 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.212915 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tn47\" (UniqueName: \"kubernetes.io/projected/373809d6-f72c-4eff-afeb-1fa942bb9e22-kube-api-access-4tn47\") pod \"373809d6-f72c-4eff-afeb-1fa942bb9e22\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.212996 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-bundle\") pod \"373809d6-f72c-4eff-afeb-1fa942bb9e22\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.213104 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-util\") pod \"373809d6-f72c-4eff-afeb-1fa942bb9e22\" (UID: \"373809d6-f72c-4eff-afeb-1fa942bb9e22\") " Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.217293 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-bundle" (OuterVolumeSpecName: "bundle") pod "373809d6-f72c-4eff-afeb-1fa942bb9e22" (UID: "373809d6-f72c-4eff-afeb-1fa942bb9e22"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.229416 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/373809d6-f72c-4eff-afeb-1fa942bb9e22-kube-api-access-4tn47" (OuterVolumeSpecName: "kube-api-access-4tn47") pod "373809d6-f72c-4eff-afeb-1fa942bb9e22" (UID: "373809d6-f72c-4eff-afeb-1fa942bb9e22"). InnerVolumeSpecName "kube-api-access-4tn47". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.247248 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-util" (OuterVolumeSpecName: "util") pod "373809d6-f72c-4eff-afeb-1fa942bb9e22" (UID: "373809d6-f72c-4eff-afeb-1fa942bb9e22"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.314715 4947 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-util\") on node \"crc\" DevicePath \"\"" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.314748 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tn47\" (UniqueName: \"kubernetes.io/projected/373809d6-f72c-4eff-afeb-1fa942bb9e22-kube-api-access-4tn47\") on node \"crc\" DevicePath \"\"" Jan 25 00:22:12 crc kubenswrapper[4947]: I0125 00:22:12.314758 4947 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/373809d6-f72c-4eff-afeb-1fa942bb9e22-bundle\") on node \"crc\" DevicePath \"\"" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.215732 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpxfq" event={"ID":"c8e8b07d-e9e8-4efd-a05d-f09f78abca00","Type":"ContainerStarted","Data":"e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf"} Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.219201 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" event={"ID":"2f1a951a-1385-42b0-acf1-a549b0edb031","Type":"ContainerStarted","Data":"372e069faa69d0ce3a9062a84433192115fe716146663a353a70589654e4122c"} Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.270238 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hpxfq" podStartSLOduration=3.995618243 podStartE2EDuration="22.270216983s" podCreationTimestamp="2026-01-25 00:21:51 +0000 UTC" firstStartedPulling="2026-01-25 00:21:53.912832076 +0000 UTC m=+753.145822516" lastFinishedPulling="2026-01-25 00:22:12.187430806 +0000 UTC m=+771.420421256" observedRunningTime="2026-01-25 
00:22:13.267593959 +0000 UTC m=+772.500584399" watchObservedRunningTime="2026-01-25 00:22:13.270216983 +0000 UTC m=+772.503207423" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.310623 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-54ddbf459f-pm6cr" podStartSLOduration=9.88540953 podStartE2EDuration="13.310597611s" podCreationTimestamp="2026-01-25 00:22:00 +0000 UTC" firstStartedPulling="2026-01-25 00:22:08.774970341 +0000 UTC m=+768.007960781" lastFinishedPulling="2026-01-25 00:22:12.200158402 +0000 UTC m=+771.433148862" observedRunningTime="2026-01-25 00:22:13.3076575 +0000 UTC m=+772.540647960" watchObservedRunningTime="2026-01-25 00:22:13.310597611 +0000 UTC m=+772.543588051" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.344491 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-qz44g" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.519635 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 25 00:22:13 crc kubenswrapper[4947]: E0125 00:22:13.520042 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="373809d6-f72c-4eff-afeb-1fa942bb9e22" containerName="pull" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.520072 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="373809d6-f72c-4eff-afeb-1fa942bb9e22" containerName="pull" Jan 25 00:22:13 crc kubenswrapper[4947]: E0125 00:22:13.520106 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="373809d6-f72c-4eff-afeb-1fa942bb9e22" containerName="util" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.520117 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="373809d6-f72c-4eff-afeb-1fa942bb9e22" containerName="util" Jan 25 00:22:13 crc kubenswrapper[4947]: E0125 00:22:13.520156 4947 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="373809d6-f72c-4eff-afeb-1fa942bb9e22" containerName="extract" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.520170 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="373809d6-f72c-4eff-afeb-1fa942bb9e22" containerName="extract" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.520291 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="373809d6-f72c-4eff-afeb-1fa942bb9e22" containerName="extract" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.521285 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.531336 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-config" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.531477 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.531460 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.532698 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-hzxwm" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.534380 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.534437 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.542560 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 
00:22:13.542884 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.543040 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.550569 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631347 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631392 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631428 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631447 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631479 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631544 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631636 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631690 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " 
pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631712 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631782 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631812 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631840 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631856 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631883 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.631924 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733117 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733213 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: 
\"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733240 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733266 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733296 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733316 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733372 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: 
\"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733412 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733436 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733460 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733485 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733525 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733549 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733577 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.733598 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.737641 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: 
\"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.737948 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.738201 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.738415 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.740416 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.740881 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: 
\"kubernetes.io/downward-api/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.741698 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.742039 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.742367 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.742386 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.742693 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.742983 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.743185 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.747821 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.747940 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6\") " 
pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:13 crc kubenswrapper[4947]: I0125 00:22:13.844457 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:14 crc kubenswrapper[4947]: I0125 00:22:14.281265 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:22:17 crc kubenswrapper[4947]: I0125 00:22:17.184616 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rhx28"] Jan 25 00:22:17 crc kubenswrapper[4947]: I0125 00:22:17.185075 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rhx28" podUID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerName="registry-server" containerID="cri-o://101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2" gracePeriod=2 Jan 25 00:22:19 crc kubenswrapper[4947]: I0125 00:22:19.263376 4947 generic.go:334] "Generic (PLEG): container finished" podID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerID="101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2" exitCode=0 Jan 25 00:22:19 crc kubenswrapper[4947]: I0125 00:22:19.263420 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhx28" event={"ID":"da41a595-7e83-406d-b782-de0adf6e3d8d","Type":"ContainerDied","Data":"101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2"} Jan 25 00:22:22 crc kubenswrapper[4947]: E0125 00:22:22.118592 4947 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2 is running failed: container process not found" containerID="101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2" 
cmd=["grpc_health_probe","-addr=:50051"] Jan 25 00:22:22 crc kubenswrapper[4947]: E0125 00:22:22.119176 4947 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2 is running failed: container process not found" containerID="101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2" cmd=["grpc_health_probe","-addr=:50051"] Jan 25 00:22:22 crc kubenswrapper[4947]: E0125 00:22:22.119487 4947 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2 is running failed: container process not found" containerID="101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2" cmd=["grpc_health_probe","-addr=:50051"] Jan 25 00:22:22 crc kubenswrapper[4947]: E0125 00:22:22.119517 4947 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-rhx28" podUID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerName="registry-server" Jan 25 00:22:22 crc kubenswrapper[4947]: I0125 00:22:22.316951 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:22:22 crc kubenswrapper[4947]: I0125 00:22:22.317207 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:22:22 crc kubenswrapper[4947]: I0125 00:22:22.357663 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:22:23 crc 
kubenswrapper[4947]: I0125 00:22:23.331968 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:22:25 crc kubenswrapper[4947]: E0125 00:22:25.768218 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43" Jan 25 00:22:25 crc kubenswrapper[4947]: E0125 00:22:25.768534 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:interconnect-operator,Image:registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43,Command:[qdr-operator],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:60000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:qdr-operator,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_QDROUTERD_IMAGE,Value:registry.redhat.io/amq7/amq-interconnect@sha256:31d87473fa684178a694f9ee331d3c80f2653f9533cb65c2a325752166a077e9,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:amq7-interconnect-operator.v1.10.20,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sx6qn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly
:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod interconnect-operator-5bb49f789d-2zk49_service-telemetry(fb65215d-4c8c-4191-a224-f49ec8acfaa0): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 25 00:22:25 crc kubenswrapper[4947]: E0125 00:22:25.769836 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"interconnect-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="service-telemetry/interconnect-operator-5bb49f789d-2zk49" podUID="fb65215d-4c8c-4191-a224-f49ec8acfaa0" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.033299 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.042020 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 25 00:22:26 crc kubenswrapper[4947]: W0125 00:22:26.048631 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac6cbdf5_f2a1_4e0a_90cb_2d97e1caa9a6.slice/crio-2e7862ec85f25af6665e553c687980f94911a639a9374e04e8d2578a951dd21d WatchSource:0}: Error finding container 2e7862ec85f25af6665e553c687980f94911a639a9374e04e8d2578a951dd21d: Status 404 returned error can't find the container with id 2e7862ec85f25af6665e553c687980f94911a639a9374e04e8d2578a951dd21d Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.175279 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkh5r\" (UniqueName: \"kubernetes.io/projected/da41a595-7e83-406d-b782-de0adf6e3d8d-kube-api-access-gkh5r\") pod \"da41a595-7e83-406d-b782-de0adf6e3d8d\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.175394 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-catalog-content\") pod \"da41a595-7e83-406d-b782-de0adf6e3d8d\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.175442 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-utilities\") pod \"da41a595-7e83-406d-b782-de0adf6e3d8d\" (UID: \"da41a595-7e83-406d-b782-de0adf6e3d8d\") " Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.176476 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-utilities" (OuterVolumeSpecName: "utilities") pod "da41a595-7e83-406d-b782-de0adf6e3d8d" (UID: "da41a595-7e83-406d-b782-de0adf6e3d8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.196942 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da41a595-7e83-406d-b782-de0adf6e3d8d-kube-api-access-gkh5r" (OuterVolumeSpecName: "kube-api-access-gkh5r") pod "da41a595-7e83-406d-b782-de0adf6e3d8d" (UID: "da41a595-7e83-406d-b782-de0adf6e3d8d"). InnerVolumeSpecName "kube-api-access-gkh5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.218888 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da41a595-7e83-406d-b782-de0adf6e3d8d" (UID: "da41a595-7e83-406d-b782-de0adf6e3d8d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.276815 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.276850 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da41a595-7e83-406d-b782-de0adf6e3d8d-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.276860 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkh5r\" (UniqueName: \"kubernetes.io/projected/da41a595-7e83-406d-b782-de0adf6e3d8d-kube-api-access-gkh5r\") on node \"crc\" DevicePath \"\"" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.300407 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6","Type":"ContainerStarted","Data":"2e7862ec85f25af6665e553c687980f94911a639a9374e04e8d2578a951dd21d"} Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.303509 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rhx28" event={"ID":"da41a595-7e83-406d-b782-de0adf6e3d8d","Type":"ContainerDied","Data":"cfa4a1c82eb366c0f3d41e29a345c0087d291286eca35ee7c803be1e30f42277"} Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.303553 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rhx28" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.303559 4947 scope.go:117] "RemoveContainer" containerID="101a2d9515c68f6c70d7d549acf06d7bac932451d00ba8d176ceee4a6c10a7d2" Jan 25 00:22:26 crc kubenswrapper[4947]: E0125 00:22:26.305591 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"interconnect-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/amq7/amq-interconnect-operator@sha256:a8b621237c872ded2a1d1d948fbebd693429e4a1ced1d7922406241a078d3d43\\\"\"" pod="service-telemetry/interconnect-operator-5bb49f789d-2zk49" podUID="fb65215d-4c8c-4191-a224-f49ec8acfaa0" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.347117 4947 scope.go:117] "RemoveContainer" containerID="6cbb445a3f80a4c27484f46896be1c01f1f1240b378c14e2be68c255ab07f7b9" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.364524 4947 scope.go:117] "RemoveContainer" containerID="318d02fed846a5ed6901b65f31cfc5249873f0176dd5ac1452713156f5ee3ae6" Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.374287 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rhx28"] Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.377215 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rhx28"] Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.585686 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hpxfq"] Jan 25 00:22:26 crc kubenswrapper[4947]: I0125 00:22:26.585973 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hpxfq" podUID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" containerName="registry-server" containerID="cri-o://e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf" gracePeriod=2 Jan 25 00:22:26 crc 
kubenswrapper[4947]: I0125 00:22:26.939821 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.087951 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bmsn\" (UniqueName: \"kubernetes.io/projected/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-kube-api-access-9bmsn\") pod \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.088315 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-catalog-content\") pod \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.088471 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-utilities\") pod \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\" (UID: \"c8e8b07d-e9e8-4efd-a05d-f09f78abca00\") " Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.091324 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-utilities" (OuterVolumeSpecName: "utilities") pod "c8e8b07d-e9e8-4efd-a05d-f09f78abca00" (UID: "c8e8b07d-e9e8-4efd-a05d-f09f78abca00"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.093762 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-kube-api-access-9bmsn" (OuterVolumeSpecName: "kube-api-access-9bmsn") pod "c8e8b07d-e9e8-4efd-a05d-f09f78abca00" (UID: "c8e8b07d-e9e8-4efd-a05d-f09f78abca00"). InnerVolumeSpecName "kube-api-access-9bmsn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.097675 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da41a595-7e83-406d-b782-de0adf6e3d8d" path="/var/lib/kubelet/pods/da41a595-7e83-406d-b782-de0adf6e3d8d/volumes" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.192361 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.192437 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bmsn\" (UniqueName: \"kubernetes.io/projected/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-kube-api-access-9bmsn\") on node \"crc\" DevicePath \"\"" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.223282 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8e8b07d-e9e8-4efd-a05d-f09f78abca00" (UID: "c8e8b07d-e9e8-4efd-a05d-f09f78abca00"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.294238 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e8b07d-e9e8-4efd-a05d-f09f78abca00-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.312091 4947 generic.go:334] "Generic (PLEG): container finished" podID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" containerID="e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf" exitCode=0 Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.312195 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpxfq" event={"ID":"c8e8b07d-e9e8-4efd-a05d-f09f78abca00","Type":"ContainerDied","Data":"e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf"} Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.312228 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hpxfq" event={"ID":"c8e8b07d-e9e8-4efd-a05d-f09f78abca00","Type":"ContainerDied","Data":"13b192541fa7589b2466360e6399546425f5d80b3b3a89c1761b0ae60a095da2"} Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.312244 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hpxfq" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.312249 4947 scope.go:117] "RemoveContainer" containerID="e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.326384 4947 scope.go:117] "RemoveContainer" containerID="0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.340055 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hpxfq"] Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.344508 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hpxfq"] Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.365445 4947 scope.go:117] "RemoveContainer" containerID="9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.381896 4947 scope.go:117] "RemoveContainer" containerID="e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf" Jan 25 00:22:27 crc kubenswrapper[4947]: E0125 00:22:27.382541 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf\": container with ID starting with e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf not found: ID does not exist" containerID="e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.382585 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf"} err="failed to get container status \"e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf\": rpc error: code = NotFound desc = could not find container 
\"e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf\": container with ID starting with e4836fdef9e34e6f13ac541e3aaade10488ce09dbcca0800df64b539d7789fdf not found: ID does not exist" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.382614 4947 scope.go:117] "RemoveContainer" containerID="0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0" Jan 25 00:22:27 crc kubenswrapper[4947]: E0125 00:22:27.382928 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0\": container with ID starting with 0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0 not found: ID does not exist" containerID="0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.382967 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0"} err="failed to get container status \"0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0\": rpc error: code = NotFound desc = could not find container \"0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0\": container with ID starting with 0ad61d18978a538cad1724a2be6e708c3afaaaafc7b49772ebcaa70bca997dc0 not found: ID does not exist" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.382994 4947 scope.go:117] "RemoveContainer" containerID="9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8" Jan 25 00:22:27 crc kubenswrapper[4947]: E0125 00:22:27.383387 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8\": container with ID starting with 9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8 not found: ID does not exist" 
containerID="9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.383420 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8"} err="failed to get container status \"9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8\": rpc error: code = NotFound desc = could not find container \"9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8\": container with ID starting with 9b30e7b5190fa58e43a00f9e75f36dc5d963193ed7c3423d9800f1d9ccff24a8 not found: ID does not exist" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.655807 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc"] Jan 25 00:22:27 crc kubenswrapper[4947]: E0125 00:22:27.656014 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" containerName="registry-server" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.656025 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" containerName="registry-server" Jan 25 00:22:27 crc kubenswrapper[4947]: E0125 00:22:27.656034 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" containerName="extract-utilities" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.656040 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" containerName="extract-utilities" Jan 25 00:22:27 crc kubenswrapper[4947]: E0125 00:22:27.656051 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerName="registry-server" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.656059 4947 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerName="registry-server" Jan 25 00:22:27 crc kubenswrapper[4947]: E0125 00:22:27.656068 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerName="extract-utilities" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.656074 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerName="extract-utilities" Jan 25 00:22:27 crc kubenswrapper[4947]: E0125 00:22:27.656086 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" containerName="extract-content" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.656091 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" containerName="extract-content" Jan 25 00:22:27 crc kubenswrapper[4947]: E0125 00:22:27.656101 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerName="extract-content" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.656107 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerName="extract-content" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.656205 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" containerName="registry-server" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.656224 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="da41a595-7e83-406d-b782-de0adf6e3d8d" containerName="registry-server" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.656577 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.659915 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.660569 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.660780 4947 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-77nl2" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.671456 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc"] Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.799733 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqt4x\" (UniqueName: \"kubernetes.io/projected/4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed-kube-api-access-lqt4x\") pod \"cert-manager-operator-controller-manager-5446d6888b-dlmlc\" (UID: \"4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.799830 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-dlmlc\" (UID: \"4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.900860 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-dlmlc\" (UID: \"4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.900932 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqt4x\" (UniqueName: \"kubernetes.io/projected/4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed-kube-api-access-lqt4x\") pod \"cert-manager-operator-controller-manager-5446d6888b-dlmlc\" (UID: \"4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.901406 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-dlmlc\" (UID: \"4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc" Jan 25 00:22:27 crc kubenswrapper[4947]: I0125 00:22:27.920550 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqt4x\" (UniqueName: \"kubernetes.io/projected/4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed-kube-api-access-lqt4x\") pod \"cert-manager-operator-controller-manager-5446d6888b-dlmlc\" (UID: \"4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc" Jan 25 00:22:28 crc kubenswrapper[4947]: I0125 00:22:28.017286 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc" Jan 25 00:22:29 crc kubenswrapper[4947]: I0125 00:22:29.140075 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8e8b07d-e9e8-4efd-a05d-f09f78abca00" path="/var/lib/kubelet/pods/c8e8b07d-e9e8-4efd-a05d-f09f78abca00/volumes" Jan 25 00:22:30 crc kubenswrapper[4947]: I0125 00:22:30.139564 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc"] Jan 25 00:22:30 crc kubenswrapper[4947]: W0125 00:22:30.154244 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fe8724a_3a6a_4130_9d5b_e2e3be8d30ed.slice/crio-11f3e96ba64717225b6f67defcec6058ee830d7655b180ecc8af90a8330b9769 WatchSource:0}: Error finding container 11f3e96ba64717225b6f67defcec6058ee830d7655b180ecc8af90a8330b9769: Status 404 returned error can't find the container with id 11f3e96ba64717225b6f67defcec6058ee830d7655b180ecc8af90a8330b9769 Jan 25 00:22:30 crc kubenswrapper[4947]: I0125 00:22:30.350425 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc" event={"ID":"4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed","Type":"ContainerStarted","Data":"11f3e96ba64717225b6f67defcec6058ee830d7655b180ecc8af90a8330b9769"} Jan 25 00:22:38 crc kubenswrapper[4947]: I0125 00:22:38.414702 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc" event={"ID":"4fe8724a-3a6a-4130-9d5b-e2e3be8d30ed","Type":"ContainerStarted","Data":"573b6d1c112bd77306e078ee314e3fb6e8d6b6016410c9a9201a722d482f7ab0"} Jan 25 00:22:38 crc kubenswrapper[4947]: I0125 00:22:38.431394 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-dlmlc" podStartSLOduration=3.353529467 podStartE2EDuration="11.431372858s" podCreationTimestamp="2026-01-25 00:22:27 +0000 UTC" firstStartedPulling="2026-01-25 00:22:30.157500836 +0000 UTC m=+789.390491266" lastFinishedPulling="2026-01-25 00:22:38.235344207 +0000 UTC m=+797.468334657" observedRunningTime="2026-01-25 00:22:38.429382991 +0000 UTC m=+797.662373431" watchObservedRunningTime="2026-01-25 00:22:38.431372858 +0000 UTC m=+797.664363298" Jan 25 00:22:39 crc kubenswrapper[4947]: I0125 00:22:39.423678 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-2zk49" event={"ID":"fb65215d-4c8c-4191-a224-f49ec8acfaa0","Type":"ContainerStarted","Data":"3932ce9a7a1ad50fc9ed55a9e3e651a626810a513407886aae25c87f11a62988"} Jan 25 00:22:39 crc kubenswrapper[4947]: I0125 00:22:39.426571 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6","Type":"ContainerStarted","Data":"389df8f63a5429aa3e6c47d509692bd8ead951a42c2a2550fc4278810ebd4d4e"} Jan 25 00:22:39 crc kubenswrapper[4947]: I0125 00:22:39.444177 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-2zk49" podStartSLOduration=4.999585345 podStartE2EDuration="34.444162529s" podCreationTimestamp="2026-01-25 00:22:05 +0000 UTC" firstStartedPulling="2026-01-25 00:22:08.918830024 +0000 UTC m=+768.151820464" lastFinishedPulling="2026-01-25 00:22:38.363407208 +0000 UTC m=+797.596397648" observedRunningTime="2026-01-25 00:22:39.441973997 +0000 UTC m=+798.674964467" watchObservedRunningTime="2026-01-25 00:22:39.444162529 +0000 UTC m=+798.677152969" Jan 25 00:22:39 crc kubenswrapper[4947]: I0125 00:22:39.613117 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 25 
00:22:39 crc kubenswrapper[4947]: I0125 00:22:39.631997 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 25 00:22:41 crc kubenswrapper[4947]: I0125 00:22:41.437073 4947 generic.go:334] "Generic (PLEG): container finished" podID="ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6" containerID="389df8f63a5429aa3e6c47d509692bd8ead951a42c2a2550fc4278810ebd4d4e" exitCode=0 Jan 25 00:22:41 crc kubenswrapper[4947]: I0125 00:22:41.437159 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6","Type":"ContainerDied","Data":"389df8f63a5429aa3e6c47d509692bd8ead951a42c2a2550fc4278810ebd4d4e"} Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.019913 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-wqxxr"] Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.020862 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.025280 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.025526 4947 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-468lq" Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.026946 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.033222 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpbcm\" (UniqueName: \"kubernetes.io/projected/d860ec8b-2f41-4b81-8868-9b078b55b341-kube-api-access-qpbcm\") pod \"cert-manager-webhook-f4fb5df64-wqxxr\" (UID: \"d860ec8b-2f41-4b81-8868-9b078b55b341\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.033312 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d860ec8b-2f41-4b81-8868-9b078b55b341-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-wqxxr\" (UID: \"d860ec8b-2f41-4b81-8868-9b078b55b341\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.034852 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-wqxxr"] Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.134091 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpbcm\" (UniqueName: \"kubernetes.io/projected/d860ec8b-2f41-4b81-8868-9b078b55b341-kube-api-access-qpbcm\") pod \"cert-manager-webhook-f4fb5df64-wqxxr\" (UID: 
\"d860ec8b-2f41-4b81-8868-9b078b55b341\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.134178 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d860ec8b-2f41-4b81-8868-9b078b55b341-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-wqxxr\" (UID: \"d860ec8b-2f41-4b81-8868-9b078b55b341\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.152720 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d860ec8b-2f41-4b81-8868-9b078b55b341-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-wqxxr\" (UID: \"d860ec8b-2f41-4b81-8868-9b078b55b341\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.153192 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpbcm\" (UniqueName: \"kubernetes.io/projected/d860ec8b-2f41-4b81-8868-9b078b55b341-kube-api-access-qpbcm\") pod \"cert-manager-webhook-f4fb5df64-wqxxr\" (UID: \"d860ec8b-2f41-4b81-8868-9b078b55b341\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.336695 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.488948 4947 generic.go:334] "Generic (PLEG): container finished" podID="ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6" containerID="7934a5f9bb6f819c97b501ceddb942047b6631a4004801a28c266ec0d76fc45f" exitCode=0 Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.488996 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6","Type":"ContainerDied","Data":"7934a5f9bb6f819c97b501ceddb942047b6631a4004801a28c266ec0d76fc45f"} Jan 25 00:22:42 crc kubenswrapper[4947]: I0125 00:22:42.620297 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-wqxxr"] Jan 25 00:22:43 crc kubenswrapper[4947]: I0125 00:22:43.495050 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" event={"ID":"d860ec8b-2f41-4b81-8868-9b078b55b341","Type":"ContainerStarted","Data":"f9b6b38877a9fb096906b55e70ad4a760f4b259844414f83e1764d189833a252"} Jan 25 00:22:43 crc kubenswrapper[4947]: I0125 00:22:43.499487 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6","Type":"ContainerStarted","Data":"ed65abcb107b3b17187ba899baf6afa900a6101df6f3bed2237909312155847d"} Jan 25 00:22:43 crc kubenswrapper[4947]: I0125 00:22:43.499649 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Jan 25 00:22:43 crc kubenswrapper[4947]: I0125 00:22:43.548804 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=18.138792732 podStartE2EDuration="30.548776356s" podCreationTimestamp="2026-01-25 00:22:13 +0000 UTC" 
firstStartedPulling="2026-01-25 00:22:26.050647696 +0000 UTC m=+785.283638136" lastFinishedPulling="2026-01-25 00:22:38.46063131 +0000 UTC m=+797.693621760" observedRunningTime="2026-01-25 00:22:43.547747081 +0000 UTC m=+802.780737531" watchObservedRunningTime="2026-01-25 00:22:43.548776356 +0000 UTC m=+802.781766786" Jan 25 00:22:45 crc kubenswrapper[4947]: I0125 00:22:45.160696 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz"] Jan 25 00:22:45 crc kubenswrapper[4947]: I0125 00:22:45.161796 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz" Jan 25 00:22:45 crc kubenswrapper[4947]: I0125 00:22:45.164645 4947 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-2g48r" Jan 25 00:22:45 crc kubenswrapper[4947]: I0125 00:22:45.180089 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz"] Jan 25 00:22:45 crc kubenswrapper[4947]: I0125 00:22:45.249725 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c215f860-08a3-4dbd-b7f2-426286319aa8-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-cqxvz\" (UID: \"c215f860-08a3-4dbd-b7f2-426286319aa8\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz" Jan 25 00:22:45 crc kubenswrapper[4947]: I0125 00:22:45.249914 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9jjb\" (UniqueName: \"kubernetes.io/projected/c215f860-08a3-4dbd-b7f2-426286319aa8-kube-api-access-z9jjb\") pod \"cert-manager-cainjector-855d9ccff4-cqxvz\" (UID: \"c215f860-08a3-4dbd-b7f2-426286319aa8\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz" Jan 25 00:22:45 crc kubenswrapper[4947]: I0125 
00:22:45.351100 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9jjb\" (UniqueName: \"kubernetes.io/projected/c215f860-08a3-4dbd-b7f2-426286319aa8-kube-api-access-z9jjb\") pod \"cert-manager-cainjector-855d9ccff4-cqxvz\" (UID: \"c215f860-08a3-4dbd-b7f2-426286319aa8\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz" Jan 25 00:22:45 crc kubenswrapper[4947]: I0125 00:22:45.351180 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c215f860-08a3-4dbd-b7f2-426286319aa8-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-cqxvz\" (UID: \"c215f860-08a3-4dbd-b7f2-426286319aa8\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz" Jan 25 00:22:45 crc kubenswrapper[4947]: I0125 00:22:45.374392 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c215f860-08a3-4dbd-b7f2-426286319aa8-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-cqxvz\" (UID: \"c215f860-08a3-4dbd-b7f2-426286319aa8\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz" Jan 25 00:22:45 crc kubenswrapper[4947]: I0125 00:22:45.375438 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9jjb\" (UniqueName: \"kubernetes.io/projected/c215f860-08a3-4dbd-b7f2-426286319aa8-kube-api-access-z9jjb\") pod \"cert-manager-cainjector-855d9ccff4-cqxvz\" (UID: \"c215f860-08a3-4dbd-b7f2-426286319aa8\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz" Jan 25 00:22:45 crc kubenswrapper[4947]: I0125 00:22:45.482803 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz" Jan 25 00:22:46 crc kubenswrapper[4947]: I0125 00:22:46.189238 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz"] Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.486169 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.489713 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.491535 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jd52j" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.491730 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-global-ca" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.491977 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-sys-config" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.493344 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-ca" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.512982 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606229 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606272 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606318 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606346 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-push\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606370 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606428 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" 
(UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606458 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606484 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606502 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606618 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606748 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bgfk\" (UniqueName: \"kubernetes.io/projected/661d7c06-4a71-4c19-8fa1-bdca787b20c1-kube-api-access-7bgfk\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.606823 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709017 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709116 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709172 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: 
\"kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-push\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709205 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709228 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709246 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709339 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709439 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709465 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709519 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709563 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709618 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bgfk\" (UniqueName: \"kubernetes.io/projected/661d7c06-4a71-4c19-8fa1-bdca787b20c1-kube-api-access-7bgfk\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 
00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709706 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709866 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.709885 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.710035 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.710326 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.710376 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.710422 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.710689 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.710802 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.723140 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-push\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.730548 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bgfk\" (UniqueName: \"kubernetes.io/projected/661d7c06-4a71-4c19-8fa1-bdca787b20c1-kube-api-access-7bgfk\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.737121 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 25 00:22:50 crc kubenswrapper[4947]: I0125 00:22:50.814918 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Jan 25 00:22:52 crc kubenswrapper[4947]: I0125 00:22:52.581409 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 25 00:22:52 crc kubenswrapper[4947]: I0125 00:22:52.587799 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"661d7c06-4a71-4c19-8fa1-bdca787b20c1","Type":"ContainerStarted","Data":"87586fe90c2425fe7d5e7682b0325feea1e33f7914939e906cffe4b6a4e486d3"}
Jan 25 00:22:52 crc kubenswrapper[4947]: I0125 00:22:52.591819 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" event={"ID":"d860ec8b-2f41-4b81-8868-9b078b55b341","Type":"ContainerStarted","Data":"c834bdca81785468ff5ec19b7492380d39d838d2f05f6330f36e8e7c57d3960a"}
Jan 25 00:22:52 crc kubenswrapper[4947]: I0125 00:22:52.592458 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr"
Jan 25 00:22:52 crc kubenswrapper[4947]: I0125 00:22:52.593467 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz" event={"ID":"c215f860-08a3-4dbd-b7f2-426286319aa8","Type":"ContainerStarted","Data":"5e580cfa8b48f9f1f211860e22b8fc368c1b91dbfe0cf5a31c8623b8758686ee"}
Jan 25 00:22:52 crc kubenswrapper[4947]: I0125 00:22:52.607617 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr" podStartSLOduration=0.852274381 podStartE2EDuration="10.607595296s" podCreationTimestamp="2026-01-25 00:22:42 +0000 UTC" firstStartedPulling="2026-01-25 00:22:42.629717323 +0000 UTC m=+801.862707763" lastFinishedPulling="2026-01-25 00:22:52.385038238 +0000 UTC m=+811.618028678" observedRunningTime="2026-01-25 00:22:52.605225289 +0000 UTC m=+811.838215739" watchObservedRunningTime="2026-01-25 00:22:52.607595296 +0000 UTC m=+811.840585736"
Jan 25 00:22:53 crc kubenswrapper[4947]: I0125 00:22:53.600648 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz" event={"ID":"c215f860-08a3-4dbd-b7f2-426286319aa8","Type":"ContainerStarted","Data":"8240170dd5cb499d26dc4e187db57d057e181984ec0210b4b6553f70f73aaa2b"}
Jan 25 00:22:53 crc kubenswrapper[4947]: I0125 00:22:53.945723 4947 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6" containerName="elasticsearch" probeResult="failure" output=<
Jan 25 00:22:53 crc kubenswrapper[4947]: {"timestamp": "2026-01-25T00:22:53+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Jan 25 00:22:53 crc kubenswrapper[4947]: >
Jan 25 00:22:57 crc kubenswrapper[4947]: I0125 00:22:57.343393 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-wqxxr"
Jan 25 00:22:57 crc kubenswrapper[4947]: I0125 00:22:57.367159 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-cqxvz" podStartSLOduration=11.672482 podStartE2EDuration="12.36711685s" podCreationTimestamp="2026-01-25 00:22:45 +0000 UTC" firstStartedPulling="2026-01-25 00:22:52.26458871 +0000 UTC m=+811.497579150" lastFinishedPulling="2026-01-25 00:22:52.95922353 +0000 UTC m=+812.192214000" observedRunningTime="2026-01-25 00:22:53.627214401 +0000 UTC m=+812.860204851" watchObservedRunningTime="2026-01-25 00:22:57.36711685 +0000 UTC m=+816.600107290"
Jan 25 00:22:59 crc kubenswrapper[4947]: I0125 00:22:59.278322 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0"
Jan 25 00:23:00 crc kubenswrapper[4947]: I0125 00:23:00.651033 4947 generic.go:334] "Generic (PLEG): container finished" podID="661d7c06-4a71-4c19-8fa1-bdca787b20c1" containerID="29e25cd653e5646a037201713219653ddf735f00af3b32cacab158f9791bc90a" exitCode=0
Jan 25 00:23:00 crc kubenswrapper[4947]: I0125 00:23:00.651160 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"661d7c06-4a71-4c19-8fa1-bdca787b20c1","Type":"ContainerDied","Data":"29e25cd653e5646a037201713219653ddf735f00af3b32cacab158f9791bc90a"}
Jan 25 00:23:00 crc kubenswrapper[4947]: I0125 00:23:00.953011 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"]
Jan 25 00:23:00 crc kubenswrapper[4947]: I0125 00:23:00.976936 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-tgcft"]
Jan 25 00:23:00 crc kubenswrapper[4947]: I0125 00:23:00.978321 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-tgcft"
Jan 25 00:23:00 crc kubenswrapper[4947]: I0125 00:23:00.980874 4947 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-g9rj2"
Jan 25 00:23:00 crc kubenswrapper[4947]: I0125 00:23:00.988315 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-tgcft"]
Jan 25 00:23:01 crc kubenswrapper[4947]: I0125 00:23:01.075750 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6k72\" (UniqueName: \"kubernetes.io/projected/6beb1442-5e99-4164-8077-50d6eb5dbd44-kube-api-access-q6k72\") pod \"cert-manager-86cb77c54b-tgcft\" (UID: \"6beb1442-5e99-4164-8077-50d6eb5dbd44\") " pod="cert-manager/cert-manager-86cb77c54b-tgcft"
Jan 25 00:23:01 crc kubenswrapper[4947]: I0125 00:23:01.075877 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6beb1442-5e99-4164-8077-50d6eb5dbd44-bound-sa-token\") pod \"cert-manager-86cb77c54b-tgcft\" (UID: \"6beb1442-5e99-4164-8077-50d6eb5dbd44\") " pod="cert-manager/cert-manager-86cb77c54b-tgcft"
Jan 25 00:23:01 crc kubenswrapper[4947]: I0125 00:23:01.177524 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6k72\" (UniqueName: \"kubernetes.io/projected/6beb1442-5e99-4164-8077-50d6eb5dbd44-kube-api-access-q6k72\") pod \"cert-manager-86cb77c54b-tgcft\" (UID: \"6beb1442-5e99-4164-8077-50d6eb5dbd44\") " pod="cert-manager/cert-manager-86cb77c54b-tgcft"
Jan 25 00:23:01 crc kubenswrapper[4947]: I0125 00:23:01.179487 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6beb1442-5e99-4164-8077-50d6eb5dbd44-bound-sa-token\") pod \"cert-manager-86cb77c54b-tgcft\" (UID: \"6beb1442-5e99-4164-8077-50d6eb5dbd44\") " pod="cert-manager/cert-manager-86cb77c54b-tgcft"
Jan 25 00:23:01 crc kubenswrapper[4947]: I0125 00:23:01.201963 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6k72\" (UniqueName: \"kubernetes.io/projected/6beb1442-5e99-4164-8077-50d6eb5dbd44-kube-api-access-q6k72\") pod \"cert-manager-86cb77c54b-tgcft\" (UID: \"6beb1442-5e99-4164-8077-50d6eb5dbd44\") " pod="cert-manager/cert-manager-86cb77c54b-tgcft"
Jan 25 00:23:01 crc kubenswrapper[4947]: I0125 00:23:01.202089 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6beb1442-5e99-4164-8077-50d6eb5dbd44-bound-sa-token\") pod \"cert-manager-86cb77c54b-tgcft\" (UID: \"6beb1442-5e99-4164-8077-50d6eb5dbd44\") " pod="cert-manager/cert-manager-86cb77c54b-tgcft"
Jan 25 00:23:01 crc kubenswrapper[4947]: I0125 00:23:01.309879 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-tgcft"
Jan 25 00:23:01 crc kubenswrapper[4947]: I0125 00:23:01.664994 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"661d7c06-4a71-4c19-8fa1-bdca787b20c1","Type":"ContainerStarted","Data":"42d0401113a9cc459f1654c1224909ba635689103f9bf3566db1b01252300549"}
Jan 25 00:23:01 crc kubenswrapper[4947]: I0125 00:23:01.665281 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="661d7c06-4a71-4c19-8fa1-bdca787b20c1" containerName="docker-build" containerID="cri-o://42d0401113a9cc459f1654c1224909ba635689103f9bf3566db1b01252300549" gracePeriod=30
Jan 25 00:23:01 crc kubenswrapper[4947]: I0125 00:23:01.705676 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-1-build" podStartSLOduration=4.387894255 podStartE2EDuration="11.705655747s" podCreationTimestamp="2026-01-25 00:22:50 +0000 UTC" firstStartedPulling="2026-01-25 00:22:52.582987976 +0000 UTC m=+811.815978416" lastFinishedPulling="2026-01-25 00:22:59.900749468 +0000 UTC m=+819.133739908" observedRunningTime="2026-01-25 00:23:01.70200722 +0000 UTC m=+820.934997660" watchObservedRunningTime="2026-01-25 00:23:01.705655747 +0000 UTC m=+820.938646197"
Jan 25 00:23:01 crc kubenswrapper[4947]: I0125 00:23:01.829991 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-tgcft"]
Jan 25 00:23:01 crc kubenswrapper[4947]: W0125 00:23:01.841147 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6beb1442_5e99_4164_8077_50d6eb5dbd44.slice/crio-207b9082d9fd6b34ea2385d5a82c34ec6a09602b0c9b39bde3ad7bff4325ee00 WatchSource:0}: Error finding container 207b9082d9fd6b34ea2385d5a82c34ec6a09602b0c9b39bde3ad7bff4325ee00: Status 404 returned error can't find the container with id 207b9082d9fd6b34ea2385d5a82c34ec6a09602b0c9b39bde3ad7bff4325ee00
Jan 25 00:23:02 crc kubenswrapper[4947]: I0125 00:23:02.671978 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-tgcft" event={"ID":"6beb1442-5e99-4164-8077-50d6eb5dbd44","Type":"ContainerStarted","Data":"207b9082d9fd6b34ea2385d5a82c34ec6a09602b0c9b39bde3ad7bff4325ee00"}
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.125066 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.126519 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.129583 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-sys-config"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.129626 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-ca"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.130063 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-global-ca"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.167601 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.209737 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.209812 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.209922 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.209984 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-push\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.210008 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.210035 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.210066 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.210086 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.210147 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.210221 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.210257 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.210304 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc8cj\" (UniqueName: \"kubernetes.io/projected/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-kube-api-access-jc8cj\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312152 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-push\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312233 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312288 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312328 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312382 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312410 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312473 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312500 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312570 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc8cj\" (UniqueName: \"kubernetes.io/projected/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-kube-api-access-jc8cj\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312632 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312701 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312739 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.312828 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.313539 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.313730 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.313743 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.313799 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.314023 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.314086 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.314668 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.314874 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.319949 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.321251 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-push\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.338053 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc8cj\" (UniqueName: \"kubernetes.io/projected/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-kube-api-access-jc8cj\") pod \"service-telemetry-operator-2-build\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.450266 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build"
Jan 25 00:23:03 crc kubenswrapper[4947]: I0125 00:23:03.959149 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"]
Jan 25 00:23:04 crc kubenswrapper[4947]: I0125 00:23:04.687930 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e","Type":"ContainerStarted","Data":"57b1451538b238472d5680ea17ac3804ec620b15075d2890a204aa83384bdd41"}
Jan 25 00:23:09 crc kubenswrapper[4947]: I0125 00:23:09.430739 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_661d7c06-4a71-4c19-8fa1-bdca787b20c1/docker-build/0.log"
Jan 25 00:23:09 crc kubenswrapper[4947]: I0125 00:23:09.431852 4947 generic.go:334] "Generic (PLEG): container finished" podID="661d7c06-4a71-4c19-8fa1-bdca787b20c1" containerID="42d0401113a9cc459f1654c1224909ba635689103f9bf3566db1b01252300549" exitCode=-1
Jan 25 00:23:09 crc kubenswrapper[4947]: I0125 00:23:09.431893 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"661d7c06-4a71-4c19-8fa1-bdca787b20c1","Type":"ContainerDied","Data":"42d0401113a9cc459f1654c1224909ba635689103f9bf3566db1b01252300549"}
Jan 25 00:23:12 crc kubenswrapper[4947]: I0125 00:23:12.455496 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e","Type":"ContainerStarted","Data":"21824eaf8102a5c1782785b66dfb26808429386a84873ff0bff3535456020c43"}
Jan 25 00:23:12 crc kubenswrapper[4947]: I0125 00:23:12.462791 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-tgcft" event={"ID":"6beb1442-5e99-4164-8077-50d6eb5dbd44","Type":"ContainerStarted","Data":"ac5038159970cbc4930dbdf37643e6b49b70d5c771b838917117990074ca16be"}
Jan 25 00:23:12 crc kubenswrapper[4947]: I0125 00:23:12.510309 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-tgcft" podStartSLOduration=12.510278519 podStartE2EDuration="12.510278519s" podCreationTimestamp="2026-01-25 00:23:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:23:12.482150684 +0000 UTC m=+831.715141134" watchObservedRunningTime="2026-01-25 00:23:12.510278519 +0000 UTC m=+831.743268959"
Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.584834 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_661d7c06-4a71-4c19-8fa1-bdca787b20c1/docker-build/0.log"
Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.586729 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"661d7c06-4a71-4c19-8fa1-bdca787b20c1","Type":"ContainerDied","Data":"87586fe90c2425fe7d5e7682b0325feea1e33f7914939e906cffe4b6a4e486d3"}
Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.586805 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87586fe90c2425fe7d5e7682b0325feea1e33f7914939e906cffe4b6a4e486d3"
Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.644178 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_661d7c06-4a71-4c19-8fa1-bdca787b20c1/docker-build/0.log"
Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.645279 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802068 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bgfk\" (UniqueName: \"kubernetes.io/projected/661d7c06-4a71-4c19-8fa1-bdca787b20c1-kube-api-access-7bgfk\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") "
Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802183 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-system-configs\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") "
Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802208 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-proxy-ca-bundles\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") "
Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802233 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildcachedir\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") "
Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802272 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-blob-cache\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") "
Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802297 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-push\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") "
Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802343 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-root\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") "
Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802394 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildworkdir\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") "
Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802444 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-ca-bundles\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") "
Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802465 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-pull\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") "
Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802502 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName:
\"kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-node-pullsecrets\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.802558 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-run\") pod \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\" (UID: \"661d7c06-4a71-4c19-8fa1-bdca787b20c1\") " Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.803424 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.803516 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.803696 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.803953 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.804078 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.804638 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.804946 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.808344 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.808927 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.810783 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-pull" (OuterVolumeSpecName: "builder-dockercfg-jd52j-pull") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "builder-dockercfg-jd52j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.814933 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-push" (OuterVolumeSpecName: "builder-dockercfg-jd52j-push") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "builder-dockercfg-jd52j-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.817076 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/661d7c06-4a71-4c19-8fa1-bdca787b20c1-kube-api-access-7bgfk" (OuterVolumeSpecName: "kube-api-access-7bgfk") pod "661d7c06-4a71-4c19-8fa1-bdca787b20c1" (UID: "661d7c06-4a71-4c19-8fa1-bdca787b20c1"). InnerVolumeSpecName "kube-api-access-7bgfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904715 4947 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904760 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-pull\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904775 4947 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904787 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904798 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bgfk\" (UniqueName: \"kubernetes.io/projected/661d7c06-4a71-4c19-8fa1-bdca787b20c1-kube-api-access-7bgfk\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904809 4947 
reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904820 4947 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904833 4947 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904844 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/661d7c06-4a71-4c19-8fa1-bdca787b20c1-builder-dockercfg-jd52j-push\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904857 4947 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904868 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:14 crc kubenswrapper[4947]: I0125 00:23:14.904881 4947 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/661d7c06-4a71-4c19-8fa1-bdca787b20c1-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 25 00:23:15 crc kubenswrapper[4947]: I0125 00:23:15.589989 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 25 00:23:15 crc kubenswrapper[4947]: I0125 00:23:15.627458 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 25 00:23:15 crc kubenswrapper[4947]: I0125 00:23:15.643094 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 25 00:23:17 crc kubenswrapper[4947]: I0125 00:23:17.117857 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="661d7c06-4a71-4c19-8fa1-bdca787b20c1" path="/var/lib/kubelet/pods/661d7c06-4a71-4c19-8fa1-bdca787b20c1/volumes" Jan 25 00:23:25 crc kubenswrapper[4947]: I0125 00:23:25.656744 4947 generic.go:334] "Generic (PLEG): container finished" podID="f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" containerID="21824eaf8102a5c1782785b66dfb26808429386a84873ff0bff3535456020c43" exitCode=0 Jan 25 00:23:25 crc kubenswrapper[4947]: I0125 00:23:25.656841 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e","Type":"ContainerDied","Data":"21824eaf8102a5c1782785b66dfb26808429386a84873ff0bff3535456020c43"} Jan 25 00:23:26 crc kubenswrapper[4947]: I0125 00:23:26.664375 4947 generic.go:334] "Generic (PLEG): container finished" podID="f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" containerID="a59fb62c9dc1e7977c32c6df788cb5db1cec709bb8d7251a71d0d7328202f9cd" exitCode=0 Jan 25 00:23:26 crc kubenswrapper[4947]: I0125 00:23:26.664616 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e","Type":"ContainerDied","Data":"a59fb62c9dc1e7977c32c6df788cb5db1cec709bb8d7251a71d0d7328202f9cd"} Jan 25 00:23:26 crc kubenswrapper[4947]: I0125 00:23:26.722870 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_f499c9d2-38e0-4cb5-a5d2-1b0142726c5e/manage-dockerfile/0.log" Jan 25 00:23:27 crc kubenswrapper[4947]: I0125 00:23:27.676372 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e","Type":"ContainerStarted","Data":"2d797b0b6199276b6cd59577ed9b25e9afa5b49c9ace218d218e4e1f4e98c455"} Jan 25 00:23:27 crc kubenswrapper[4947]: I0125 00:23:27.716949 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=24.716931469 podStartE2EDuration="24.716931469s" podCreationTimestamp="2026-01-25 00:23:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:23:27.7148561 +0000 UTC m=+846.947846560" watchObservedRunningTime="2026-01-25 00:23:27.716931469 +0000 UTC m=+846.949921919" Jan 25 00:23:47 crc kubenswrapper[4947]: I0125 00:23:47.073080 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:23:47 crc kubenswrapper[4947]: I0125 00:23:47.073863 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.786875 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4mr96"] Jan 25 00:24:00 crc 
kubenswrapper[4947]: E0125 00:24:00.787937 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="661d7c06-4a71-4c19-8fa1-bdca787b20c1" containerName="manage-dockerfile" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.787959 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="661d7c06-4a71-4c19-8fa1-bdca787b20c1" containerName="manage-dockerfile" Jan 25 00:24:00 crc kubenswrapper[4947]: E0125 00:24:00.787984 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="661d7c06-4a71-4c19-8fa1-bdca787b20c1" containerName="docker-build" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.787995 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="661d7c06-4a71-4c19-8fa1-bdca787b20c1" containerName="docker-build" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.788205 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="661d7c06-4a71-4c19-8fa1-bdca787b20c1" containerName="docker-build" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.789614 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.795968 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mr96"] Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.845736 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-catalog-content\") pod \"community-operators-4mr96\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.845814 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwvk9\" (UniqueName: \"kubernetes.io/projected/d01d9565-1bc1-4895-ab51-3c469f07d4c6-kube-api-access-wwvk9\") pod \"community-operators-4mr96\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.845955 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-utilities\") pod \"community-operators-4mr96\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.947338 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwvk9\" (UniqueName: \"kubernetes.io/projected/d01d9565-1bc1-4895-ab51-3c469f07d4c6-kube-api-access-wwvk9\") pod \"community-operators-4mr96\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.947909 4947 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-utilities\") pod \"community-operators-4mr96\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.948480 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-utilities\") pod \"community-operators-4mr96\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.948660 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-catalog-content\") pod \"community-operators-4mr96\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.949018 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-catalog-content\") pod \"community-operators-4mr96\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:00 crc kubenswrapper[4947]: I0125 00:24:00.976252 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwvk9\" (UniqueName: \"kubernetes.io/projected/d01d9565-1bc1-4895-ab51-3c469f07d4c6-kube-api-access-wwvk9\") pod \"community-operators-4mr96\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:01 crc kubenswrapper[4947]: I0125 00:24:01.114414 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:01 crc kubenswrapper[4947]: I0125 00:24:01.391304 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4mr96"] Jan 25 00:24:02 crc kubenswrapper[4947]: I0125 00:24:02.127411 4947 generic.go:334] "Generic (PLEG): container finished" podID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" containerID="1ffe76381fa0fd773a56c6c8064db7fcd61ac5c99771a2a85062c8791b58dc75" exitCode=0 Jan 25 00:24:02 crc kubenswrapper[4947]: I0125 00:24:02.127676 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mr96" event={"ID":"d01d9565-1bc1-4895-ab51-3c469f07d4c6","Type":"ContainerDied","Data":"1ffe76381fa0fd773a56c6c8064db7fcd61ac5c99771a2a85062c8791b58dc75"} Jan 25 00:24:02 crc kubenswrapper[4947]: I0125 00:24:02.127855 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mr96" event={"ID":"d01d9565-1bc1-4895-ab51-3c469f07d4c6","Type":"ContainerStarted","Data":"976eff545a016fc3cc718daaa9571b46942fe16057e80a3eebd96df87ec75e65"} Jan 25 00:24:03 crc kubenswrapper[4947]: I0125 00:24:03.135477 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mr96" event={"ID":"d01d9565-1bc1-4895-ab51-3c469f07d4c6","Type":"ContainerStarted","Data":"6e1eaa0a1619dff5720dacc9a54e6f3666f22f1e999581dd65ae0281a50e0fd2"} Jan 25 00:24:04 crc kubenswrapper[4947]: I0125 00:24:04.144845 4947 generic.go:334] "Generic (PLEG): container finished" podID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" containerID="6e1eaa0a1619dff5720dacc9a54e6f3666f22f1e999581dd65ae0281a50e0fd2" exitCode=0 Jan 25 00:24:04 crc kubenswrapper[4947]: I0125 00:24:04.144908 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mr96" 
event={"ID":"d01d9565-1bc1-4895-ab51-3c469f07d4c6","Type":"ContainerDied","Data":"6e1eaa0a1619dff5720dacc9a54e6f3666f22f1e999581dd65ae0281a50e0fd2"} Jan 25 00:24:05 crc kubenswrapper[4947]: I0125 00:24:05.153028 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mr96" event={"ID":"d01d9565-1bc1-4895-ab51-3c469f07d4c6","Type":"ContainerStarted","Data":"aa1e3f454f3e0f662f0c473f2ae2d780dea704c1c430dd5489c79186d8db5afd"} Jan 25 00:24:05 crc kubenswrapper[4947]: I0125 00:24:05.175594 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4mr96" podStartSLOduration=2.723664477 podStartE2EDuration="5.175576474s" podCreationTimestamp="2026-01-25 00:24:00 +0000 UTC" firstStartedPulling="2026-01-25 00:24:02.130557151 +0000 UTC m=+881.363547631" lastFinishedPulling="2026-01-25 00:24:04.582469148 +0000 UTC m=+883.815459628" observedRunningTime="2026-01-25 00:24:05.172188953 +0000 UTC m=+884.405179413" watchObservedRunningTime="2026-01-25 00:24:05.175576474 +0000 UTC m=+884.408566914" Jan 25 00:24:11 crc kubenswrapper[4947]: I0125 00:24:11.115819 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:11 crc kubenswrapper[4947]: I0125 00:24:11.116315 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:11 crc kubenswrapper[4947]: I0125 00:24:11.176862 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:11 crc kubenswrapper[4947]: I0125 00:24:11.275472 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:11 crc kubenswrapper[4947]: I0125 00:24:11.427420 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-4mr96"] Jan 25 00:24:13 crc kubenswrapper[4947]: I0125 00:24:13.217428 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4mr96" podUID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" containerName="registry-server" containerID="cri-o://aa1e3f454f3e0f662f0c473f2ae2d780dea704c1c430dd5489c79186d8db5afd" gracePeriod=2 Jan 25 00:24:13 crc kubenswrapper[4947]: E0125 00:24:13.316275 4947 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd01d9565_1bc1_4895_ab51_3c469f07d4c6.slice/crio-aa1e3f454f3e0f662f0c473f2ae2d780dea704c1c430dd5489c79186d8db5afd.scope\": RecentStats: unable to find data in memory cache]" Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.072334 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.072702 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.252712 4947 generic.go:334] "Generic (PLEG): container finished" podID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" containerID="aa1e3f454f3e0f662f0c473f2ae2d780dea704c1c430dd5489c79186d8db5afd" exitCode=0 Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.252927 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-4mr96" event={"ID":"d01d9565-1bc1-4895-ab51-3c469f07d4c6","Type":"ContainerDied","Data":"aa1e3f454f3e0f662f0c473f2ae2d780dea704c1c430dd5489c79186d8db5afd"} Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.319117 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.379598 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwvk9\" (UniqueName: \"kubernetes.io/projected/d01d9565-1bc1-4895-ab51-3c469f07d4c6-kube-api-access-wwvk9\") pod \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.379641 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-utilities\") pod \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.379726 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-catalog-content\") pod \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\" (UID: \"d01d9565-1bc1-4895-ab51-3c469f07d4c6\") " Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.380683 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-utilities" (OuterVolumeSpecName: "utilities") pod "d01d9565-1bc1-4895-ab51-3c469f07d4c6" (UID: "d01d9565-1bc1-4895-ab51-3c469f07d4c6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.385895 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d01d9565-1bc1-4895-ab51-3c469f07d4c6-kube-api-access-wwvk9" (OuterVolumeSpecName: "kube-api-access-wwvk9") pod "d01d9565-1bc1-4895-ab51-3c469f07d4c6" (UID: "d01d9565-1bc1-4895-ab51-3c469f07d4c6"). InnerVolumeSpecName "kube-api-access-wwvk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.424762 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d01d9565-1bc1-4895-ab51-3c469f07d4c6" (UID: "d01d9565-1bc1-4895-ab51-3c469f07d4c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.481244 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwvk9\" (UniqueName: \"kubernetes.io/projected/d01d9565-1bc1-4895-ab51-3c469f07d4c6-kube-api-access-wwvk9\") on node \"crc\" DevicePath \"\"" Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.481293 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:24:17 crc kubenswrapper[4947]: I0125 00:24:17.481321 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d01d9565-1bc1-4895-ab51-3c469f07d4c6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:24:18 crc kubenswrapper[4947]: I0125 00:24:18.266313 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4mr96" 
event={"ID":"d01d9565-1bc1-4895-ab51-3c469f07d4c6","Type":"ContainerDied","Data":"976eff545a016fc3cc718daaa9571b46942fe16057e80a3eebd96df87ec75e65"} Jan 25 00:24:18 crc kubenswrapper[4947]: I0125 00:24:18.266383 4947 scope.go:117] "RemoveContainer" containerID="aa1e3f454f3e0f662f0c473f2ae2d780dea704c1c430dd5489c79186d8db5afd" Jan 25 00:24:18 crc kubenswrapper[4947]: I0125 00:24:18.266474 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4mr96" Jan 25 00:24:18 crc kubenswrapper[4947]: I0125 00:24:18.290820 4947 scope.go:117] "RemoveContainer" containerID="6e1eaa0a1619dff5720dacc9a54e6f3666f22f1e999581dd65ae0281a50e0fd2" Jan 25 00:24:18 crc kubenswrapper[4947]: I0125 00:24:18.305817 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4mr96"] Jan 25 00:24:18 crc kubenswrapper[4947]: I0125 00:24:18.311805 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4mr96"] Jan 25 00:24:18 crc kubenswrapper[4947]: I0125 00:24:18.315794 4947 scope.go:117] "RemoveContainer" containerID="1ffe76381fa0fd773a56c6c8064db7fcd61ac5c99771a2a85062c8791b58dc75" Jan 25 00:24:19 crc kubenswrapper[4947]: I0125 00:24:19.101967 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" path="/var/lib/kubelet/pods/d01d9565-1bc1-4895-ab51-3c469f07d4c6/volumes" Jan 25 00:24:47 crc kubenswrapper[4947]: I0125 00:24:47.072623 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:24:47 crc kubenswrapper[4947]: I0125 00:24:47.073266 4947 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:24:47 crc kubenswrapper[4947]: I0125 00:24:47.073322 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:24:47 crc kubenswrapper[4947]: I0125 00:24:47.073932 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d54be0f7cb7c92bd5d7a293f8c1b17de01e73a4d2768de922b6d4f49ead1a39"} pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 00:24:47 crc kubenswrapper[4947]: I0125 00:24:47.073999 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" containerID="cri-o://3d54be0f7cb7c92bd5d7a293f8c1b17de01e73a4d2768de922b6d4f49ead1a39" gracePeriod=600 Jan 25 00:24:48 crc kubenswrapper[4947]: I0125 00:24:48.494476 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerDied","Data":"3d54be0f7cb7c92bd5d7a293f8c1b17de01e73a4d2768de922b6d4f49ead1a39"} Jan 25 00:24:48 crc kubenswrapper[4947]: I0125 00:24:48.495256 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerID="3d54be0f7cb7c92bd5d7a293f8c1b17de01e73a4d2768de922b6d4f49ead1a39" exitCode=0 Jan 25 00:24:48 crc kubenswrapper[4947]: I0125 00:24:48.495330 4947 scope.go:117] "RemoveContainer" 
containerID="f64176709d5607e82072c383022bad7109ba8a5dcfa7f1ccb95f13edf0e8c935" Jan 25 00:24:48 crc kubenswrapper[4947]: I0125 00:24:48.495367 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"16e9b3158d1aebf618ce87b5aa5158940b299b1ba44d29893c13b023abe239b1"} Jan 25 00:25:14 crc kubenswrapper[4947]: I0125 00:25:14.755652 4947 generic.go:334] "Generic (PLEG): container finished" podID="f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" containerID="2d797b0b6199276b6cd59577ed9b25e9afa5b49c9ace218d218e4e1f4e98c455" exitCode=0 Jan 25 00:25:14 crc kubenswrapper[4947]: I0125 00:25:14.755738 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e","Type":"ContainerDied","Data":"2d797b0b6199276b6cd59577ed9b25e9afa5b49c9ace218d218e4e1f4e98c455"} Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.186739 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.254532 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-pull\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.254617 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-run\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.254664 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc8cj\" (UniqueName: \"kubernetes.io/projected/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-kube-api-access-jc8cj\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.254709 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-blob-cache\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.256055 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.254802 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildworkdir\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.258089 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-system-configs\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.258195 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-proxy-ca-bundles\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.258231 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildcachedir\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.258268 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-push\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.258335 4947 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.258381 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-ca-bundles\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.258437 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-node-pullsecrets\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.258558 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-root\") pod \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\" (UID: \"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e\") " Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.258645 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.258653 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.259172 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.259231 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.259686 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.259716 4947 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.259730 4947 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.259743 4947 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.259754 4947 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.259767 4947 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.262590 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-pull" (OuterVolumeSpecName: "builder-dockercfg-jd52j-pull") pod 
"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "builder-dockercfg-jd52j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.262615 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-push" (OuterVolumeSpecName: "builder-dockercfg-jd52j-push") pod "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "builder-dockercfg-jd52j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.268767 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-kube-api-access-jc8cj" (OuterVolumeSpecName: "kube-api-access-jc8cj") pod "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "kube-api-access-jc8cj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.297330 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.361173 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-pull\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.361205 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc8cj\" (UniqueName: \"kubernetes.io/projected/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-kube-api-access-jc8cj\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.361215 4947 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.361227 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-builder-dockercfg-jd52j-push\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.432735 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.463396 4947 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.780645 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"f499c9d2-38e0-4cb5-a5d2-1b0142726c5e","Type":"ContainerDied","Data":"57b1451538b238472d5680ea17ac3804ec620b15075d2890a204aa83384bdd41"} Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.780728 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57b1451538b238472d5680ea17ac3804ec620b15075d2890a204aa83384bdd41" Jan 25 00:25:16 crc kubenswrapper[4947]: I0125 00:25:16.780867 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 25 00:25:18 crc kubenswrapper[4947]: I0125 00:25:18.687773 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" (UID: "f499c9d2-38e0-4cb5-a5d2-1b0142726c5e"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:25:18 crc kubenswrapper[4947]: I0125 00:25:18.704043 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f499c9d2-38e0-4cb5-a5d2-1b0142726c5e-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.930719 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 25 00:25:20 crc kubenswrapper[4947]: E0125 00:25:20.931059 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" containerName="manage-dockerfile" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.931079 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" containerName="manage-dockerfile" Jan 25 00:25:20 crc kubenswrapper[4947]: E0125 00:25:20.931099 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" containerName="docker-build" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.931111 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" containerName="docker-build" Jan 25 00:25:20 crc kubenswrapper[4947]: E0125 00:25:20.931160 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" containerName="git-clone" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.931173 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" containerName="git-clone" Jan 25 00:25:20 crc kubenswrapper[4947]: E0125 00:25:20.931195 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" containerName="registry-server" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.931208 4947 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" containerName="registry-server" Jan 25 00:25:20 crc kubenswrapper[4947]: E0125 00:25:20.931227 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" containerName="extract-utilities" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.931239 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" containerName="extract-utilities" Jan 25 00:25:20 crc kubenswrapper[4947]: E0125 00:25:20.931256 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" containerName="extract-content" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.931268 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" containerName="extract-content" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.931442 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="d01d9565-1bc1-4895-ab51-3c469f07d4c6" containerName="registry-server" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.931458 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="f499c9d2-38e0-4cb5-a5d2-1b0142726c5e" containerName="docker-build" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.932472 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.935774 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-ca" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.936098 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-sys-config" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.937391 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-global-ca" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.937395 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jd52j" Jan 25 00:25:20 crc kubenswrapper[4947]: I0125 00:25:20.953608 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.048995 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.049103 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.049200 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.049259 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.049347 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.049411 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.049464 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-push\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " 
pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.049530 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.049693 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.049822 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd9th\" (UniqueName: \"kubernetes.io/projected/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-kube-api-access-vd9th\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.049897 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.049977 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151275 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151361 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151481 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151529 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151565 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151609 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151686 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151745 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151770 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151761 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151818 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-push\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151895 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.151949 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.152004 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd9th\" (UniqueName: \"kubernetes.io/projected/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-kube-api-access-vd9th\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.152448 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.152599 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.152687 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.152758 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.153159 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-ca"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.156149 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jd52j"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.156265 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-sys-config"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.156589 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-global-ca"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.163142 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.163450 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.163647 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.171618 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.171909 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-push\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.181055 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd9th\" (UniqueName: \"kubernetes.io/projected/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-kube-api-access-vd9th\") pod \"smart-gateway-operator-1-build\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") " pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.248605 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:21 crc kubenswrapper[4947]: W0125 00:25:21.528946 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e46fc8f_1d0e_4901_baf3_35b9c3d210e0.slice/crio-c60a3d6c1b97fcd7d4338672133bef50222a63be5f9dcf1ac6181b7bd30b02ab WatchSource:0}: Error finding container c60a3d6c1b97fcd7d4338672133bef50222a63be5f9dcf1ac6181b7bd30b02ab: Status 404 returned error can't find the container with id c60a3d6c1b97fcd7d4338672133bef50222a63be5f9dcf1ac6181b7bd30b02ab
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.531392 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 25 00:25:21 crc kubenswrapper[4947]: I0125 00:25:21.825100 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0","Type":"ContainerStarted","Data":"c60a3d6c1b97fcd7d4338672133bef50222a63be5f9dcf1ac6181b7bd30b02ab"}
Jan 25 00:25:22 crc kubenswrapper[4947]: I0125 00:25:22.836935 4947 generic.go:334] "Generic (PLEG): container finished" podID="4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" containerID="8859bd2c23cc381a47cfe2e04937484855d70dd800833fd36744559428ce5555" exitCode=0
Jan 25 00:25:22 crc kubenswrapper[4947]: I0125 00:25:22.837000 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0","Type":"ContainerDied","Data":"8859bd2c23cc381a47cfe2e04937484855d70dd800833fd36744559428ce5555"}
Jan 25 00:25:23 crc kubenswrapper[4947]: I0125 00:25:23.847773 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0","Type":"ContainerStarted","Data":"028bccab312ad8a58a8f5190d26bb368c53b9a76f922c33081e772b4e9d903bf"}
Jan 25 00:25:23 crc kubenswrapper[4947]: I0125 00:25:23.887789 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=3.887751905 podStartE2EDuration="3.887751905s" podCreationTimestamp="2026-01-25 00:25:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:25:23.882250403 +0000 UTC m=+963.115240883" watchObservedRunningTime="2026-01-25 00:25:23.887751905 +0000 UTC m=+963.120742385"
Jan 25 00:25:31 crc kubenswrapper[4947]: I0125 00:25:31.475089 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 25 00:25:31 crc kubenswrapper[4947]: I0125 00:25:31.476430 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" containerName="docker-build" containerID="cri-o://028bccab312ad8a58a8f5190d26bb368c53b9a76f922c33081e772b4e9d903bf" gracePeriod=30
Jan 25 00:25:31 crc kubenswrapper[4947]: I0125 00:25:31.904753 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_4e46fc8f-1d0e-4901-baf3-35b9c3d210e0/docker-build/0.log"
Jan 25 00:25:31 crc kubenswrapper[4947]: I0125 00:25:31.905843 4947 generic.go:334] "Generic (PLEG): container finished" podID="4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" containerID="028bccab312ad8a58a8f5190d26bb368c53b9a76f922c33081e772b4e9d903bf" exitCode=1
Jan 25 00:25:31 crc kubenswrapper[4947]: I0125 00:25:31.905894 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0","Type":"ContainerDied","Data":"028bccab312ad8a58a8f5190d26bb368c53b9a76f922c33081e772b4e9d903bf"}
Jan 25 00:25:31 crc kubenswrapper[4947]: I0125 00:25:31.905958 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0","Type":"ContainerDied","Data":"c60a3d6c1b97fcd7d4338672133bef50222a63be5f9dcf1ac6181b7bd30b02ab"}
Jan 25 00:25:31 crc kubenswrapper[4947]: I0125 00:25:31.905980 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c60a3d6c1b97fcd7d4338672133bef50222a63be5f9dcf1ac6181b7bd30b02ab"
Jan 25 00:25:31 crc kubenswrapper[4947]: I0125 00:25:31.926180 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_4e46fc8f-1d0e-4901-baf3-35b9c3d210e0/docker-build/0.log"
Jan 25 00:25:31 crc kubenswrapper[4947]: I0125 00:25:31.927348 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113326 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-push\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") "
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113443 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-ca-bundles\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") "
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113515 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-node-pullsecrets\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") "
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113554 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildcachedir\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") "
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113593 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-system-configs\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") "
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113628 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-pull\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") "
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113665 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113659 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113693 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-proxy-ca-bundles\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") "
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113832 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-run\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") "
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113880 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd9th\" (UniqueName: \"kubernetes.io/projected/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-kube-api-access-vd9th\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") "
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.113933 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-root\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") "
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.114035 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-blob-cache\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") "
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.114081 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildworkdir\") pod \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\" (UID: \"4e46fc8f-1d0e-4901-baf3-35b9c3d210e0\") "
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.114839 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.115213 4947 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.115272 4947 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.115453 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.116011 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.116109 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.116387 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.121258 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-push" (OuterVolumeSpecName: "builder-dockercfg-jd52j-push") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "builder-dockercfg-jd52j-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.122937 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-pull" (OuterVolumeSpecName: "builder-dockercfg-jd52j-pull") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "builder-dockercfg-jd52j-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.124028 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-kube-api-access-vd9th" (OuterVolumeSpecName: "kube-api-access-vd9th") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "kube-api-access-vd9th". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.216853 4947 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.216913 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-push\") on node \"crc\" DevicePath \"\""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.216937 4947 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.216957 4947 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.216977 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-builder-dockercfg-jd52j-pull\") on node \"crc\" DevicePath \"\""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.216996 4947 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.217015 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.217033 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd9th\" (UniqueName: \"kubernetes.io/projected/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-kube-api-access-vd9th\") on node \"crc\" DevicePath \"\""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.312853 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.318319 4947 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.573349 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" (UID: "4e46fc8f-1d0e-4901-baf3-35b9c3d210e0"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.622789 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.914536 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build"
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.966364 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 25 00:25:32 crc kubenswrapper[4947]: I0125 00:25:32.983654 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"]
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.102043 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" path="/var/lib/kubelet/pods/4e46fc8f-1d0e-4901-baf3-35b9c3d210e0/volumes"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.121386 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Jan 25 00:25:33 crc kubenswrapper[4947]: E0125 00:25:33.121674 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" containerName="manage-dockerfile"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.121696 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" containerName="manage-dockerfile"
Jan 25 00:25:33 crc kubenswrapper[4947]: E0125 00:25:33.121712 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" containerName="docker-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.121721 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" containerName="docker-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.121911 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e46fc8f-1d0e-4901-baf3-35b9c3d210e0" containerName="docker-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.123072 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.125988 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-global-ca"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.126308 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jd52j"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.126525 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-sys-config"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.127241 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-ca"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.161381 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.232055 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.232196 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.232230 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.232274 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-push\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.232296 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkr5w\" (UniqueName: \"kubernetes.io/projected/24dd009b-58df-485b-b901-4a51266605a5-kube-api-access-dkr5w\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.232829 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.232941 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.233055 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.233173 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.233293 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.233376 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.233451 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.335742 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.335809 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.335848 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.335887 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-push\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.335915 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkr5w\" (UniqueName: \"kubernetes.io/projected/24dd009b-58df-485b-b901-4a51266605a5-kube-api-access-dkr5w\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.335938 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.335972 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.335998 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.336027 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.336071 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.336099 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.336151 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.336378 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.336478 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.336624 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.337351 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.337433 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.337497 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.337348 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.337379 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.338546 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.343166 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-push\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.343174 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.362303 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkr5w\" (UniqueName: \"kubernetes.io/projected/24dd009b-58df-485b-b901-4a51266605a5-kube-api-access-dkr5w\") pod \"smart-gateway-operator-2-build\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") " pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.441388 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.671602 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"]
Jan 25 00:25:33 crc kubenswrapper[4947]: I0125 00:25:33.923382 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"24dd009b-58df-485b-b901-4a51266605a5","Type":"ContainerStarted","Data":"316740fcb3496165d3a51b4cf681d66241a263c85170f99c944e7e013a8dc739"}
Jan 25 00:25:34 crc kubenswrapper[4947]: I0125 00:25:34.934582 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"24dd009b-58df-485b-b901-4a51266605a5","Type":"ContainerStarted","Data":"de1a6e7022681b695c7adc8df24c0c416eef349bd035dbd66a5282f51988919d"}
Jan 25 00:25:35 crc kubenswrapper[4947]: E0125 00:25:35.103258 4947 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.163:55100->38.102.83.163:33315: write tcp 38.102.83.163:55100->38.102.83.163:33315: write: connection reset by peer
Jan 25 00:25:35 crc kubenswrapper[4947]: I0125 00:25:35.944372 4947 generic.go:334] "Generic (PLEG): container finished" podID="24dd009b-58df-485b-b901-4a51266605a5" containerID="de1a6e7022681b695c7adc8df24c0c416eef349bd035dbd66a5282f51988919d" exitCode=0
Jan 25 00:25:35 crc kubenswrapper[4947]: I0125 00:25:35.944422 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"24dd009b-58df-485b-b901-4a51266605a5","Type":"ContainerDied","Data":"de1a6e7022681b695c7adc8df24c0c416eef349bd035dbd66a5282f51988919d"}
Jan 25 00:25:36 crc kubenswrapper[4947]: I0125 00:25:36.955173 4947 generic.go:334] "Generic (PLEG): container finished" podID="24dd009b-58df-485b-b901-4a51266605a5" containerID="654cbcf4fecd28f2a127e8c0c0ea41930b9994d3b82a1dd577d5782a16c50141" exitCode=0
Jan 25 00:25:36 crc kubenswrapper[4947]: I0125 00:25:36.955252 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"24dd009b-58df-485b-b901-4a51266605a5","Type":"ContainerDied","Data":"654cbcf4fecd28f2a127e8c0c0ea41930b9994d3b82a1dd577d5782a16c50141"}
Jan 25 00:25:37 crc kubenswrapper[4947]: I0125 00:25:37.007706 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_24dd009b-58df-485b-b901-4a51266605a5/manage-dockerfile/0.log"
Jan 25 00:25:37 crc kubenswrapper[4947]: I0125 00:25:37.973593 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"24dd009b-58df-485b-b901-4a51266605a5","Type":"ContainerStarted","Data":"353fad3d8231255be347fabf3161b03de5be600c5db257bc7c559537402b9c78"}
Jan 25 00:25:38 crc kubenswrapper[4947]: I0125 00:25:38.037446 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=5.037426023 podStartE2EDuration="5.037426023s" podCreationTimestamp="2026-01-25 00:25:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:25:38.029316449 +0000 UTC m=+977.262306889" watchObservedRunningTime="2026-01-25 00:25:38.037426023 +0000 UTC m=+977.270416473"
Jan 25 00:27:00 crc kubenswrapper[4947]: I0125 00:27:00.563625 4947 generic.go:334] "Generic (PLEG): container finished" podID="24dd009b-58df-485b-b901-4a51266605a5" containerID="353fad3d8231255be347fabf3161b03de5be600c5db257bc7c559537402b9c78" exitCode=0
Jan 25 00:27:00 crc kubenswrapper[4947]: I0125 00:27:00.563722 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"24dd009b-58df-485b-b901-4a51266605a5","Type":"ContainerDied","Data":"353fad3d8231255be347fabf3161b03de5be600c5db257bc7c559537402b9c78"}
Jan 25 00:27:01 crc kubenswrapper[4947]: I0125 00:27:01.881947 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.041533 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-ca-bundles\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") "
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.041625 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-proxy-ca-bundles\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") "
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.041660 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-build-blob-cache\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") "
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.041700 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-root\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") "
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.041747 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-push\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") "
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.041813 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkr5w\" (UniqueName: \"kubernetes.io/projected/24dd009b-58df-485b-b901-4a51266605a5-kube-api-access-dkr5w\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") "
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.041843 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-system-configs\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") "
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.041930 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-run\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") "
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.041980 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-buildcachedir\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") "
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.042017 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-buildworkdir\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") "
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.042068 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-pull\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") "
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.042117 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-node-pullsecrets\") pod \"24dd009b-58df-485b-b901-4a51266605a5\" (UID: \"24dd009b-58df-485b-b901-4a51266605a5\") "
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.042632 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.042682 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.043035 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.044011 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.044730 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.045331 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.048779 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.049406 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-push" (OuterVolumeSpecName: "builder-dockercfg-jd52j-push") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "builder-dockercfg-jd52j-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.050341 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24dd009b-58df-485b-b901-4a51266605a5-kube-api-access-dkr5w" (OuterVolumeSpecName: "kube-api-access-dkr5w") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "kube-api-access-dkr5w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.050713 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-pull" (OuterVolumeSpecName: "builder-dockercfg-jd52j-pull") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "builder-dockercfg-jd52j-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.144622 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-pull\") on node \"crc\" DevicePath \"\""
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.144671 4947 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.144684 4947 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.144695 4947 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.144708 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/24dd009b-58df-485b-b901-4a51266605a5-builder-dockercfg-jd52j-push\") on node \"crc\" DevicePath \"\""
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.144720 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkr5w\" (UniqueName: \"kubernetes.io/projected/24dd009b-58df-485b-b901-4a51266605a5-kube-api-access-dkr5w\") on node \"crc\" DevicePath \"\""
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.144731 4947 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/24dd009b-58df-485b-b901-4a51266605a5-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.144742 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.144753 4947 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/24dd009b-58df-485b-b901-4a51266605a5-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.144767 4947 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.234909 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.246747 4947 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.587467 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"24dd009b-58df-485b-b901-4a51266605a5","Type":"ContainerDied","Data":"316740fcb3496165d3a51b4cf681d66241a263c85170f99c944e7e013a8dc739"}
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.587543 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="316740fcb3496165d3a51b4cf681d66241a263c85170f99c944e7e013a8dc739"
Jan 25 00:27:02 crc kubenswrapper[4947]: I0125 00:27:02.587621 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build"
Jan 25 00:27:03 crc kubenswrapper[4947]: I0125 00:27:03.976626 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "24dd009b-58df-485b-b901-4a51266605a5" (UID: "24dd009b-58df-485b-b901-4a51266605a5"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:27:04 crc kubenswrapper[4947]: I0125 00:27:04.075722 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/24dd009b-58df-485b-b901-4a51266605a5-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.802006 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"]
Jan 25 00:27:06 crc kubenswrapper[4947]: E0125 00:27:06.802586 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24dd009b-58df-485b-b901-4a51266605a5" containerName="docker-build"
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.802598 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="24dd009b-58df-485b-b901-4a51266605a5" containerName="docker-build"
Jan 25 00:27:06 crc kubenswrapper[4947]: E0125 00:27:06.802606 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24dd009b-58df-485b-b901-4a51266605a5" containerName="manage-dockerfile"
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.802612 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="24dd009b-58df-485b-b901-4a51266605a5" containerName="manage-dockerfile"
Jan 25 00:27:06 crc kubenswrapper[4947]: E0125 00:27:06.802635 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24dd009b-58df-485b-b901-4a51266605a5" containerName="git-clone"
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.802642 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="24dd009b-58df-485b-b901-4a51266605a5" containerName="git-clone"
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.802740 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="24dd009b-58df-485b-b901-4a51266605a5" containerName="docker-build"
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.803340 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build"
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.805021 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-ca"
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.805149 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-global-ca"
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.805084 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jd52j"
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.812531 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-sys-config"
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.813654 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"]
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915229 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbnkq\" (UniqueName: \"kubernetes.io/projected/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-kube-api-access-sbnkq\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build"
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915297 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildworkdir\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build"
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915327 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-root\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build"
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915349 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-run\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build"
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915427 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildcachedir\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build"
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915474 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build"
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915510 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-pull\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build"
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915535 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build"
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915570 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build"
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915621 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build"
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915650 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-push\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build"
Jan 25 00:27:06 crc kubenswrapper[4947]: I0125 00:27:06.915679 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-system-configs\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build"
Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.018856 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildworkdir\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build"
Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.018942 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbnkq\" (UniqueName: \"kubernetes.io/projected/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-kube-api-access-sbnkq\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build"
Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.019005 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-root\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build"
Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.019050 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-run\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build"
Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.019176 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildcachedir\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build"
Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.019224 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-ca-bundles\") pod
\"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.019302 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-pull\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.019375 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.019442 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.019520 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.019581 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-push\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " 
pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.019656 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-system-configs\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.020572 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.020789 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.020876 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-root\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.020989 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildworkdir\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.021045 4947 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-system-configs\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.021656 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildcachedir\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.032061 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.034475 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.240404 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-run\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.241326 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-push\" (UniqueName: 
\"kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-push\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.241419 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-pull\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.246508 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbnkq\" (UniqueName: \"kubernetes.io/projected/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-kube-api-access-sbnkq\") pod \"sg-core-1-build\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.418907 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 25 00:27:07 crc kubenswrapper[4947]: I0125 00:27:07.687024 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 25 00:27:08 crc kubenswrapper[4947]: I0125 00:27:08.654461 4947 generic.go:334] "Generic (PLEG): container finished" podID="3b32a95f-4c19-445e-b87b-cbe8bc5f201a" containerID="4ec22a781a4580f96f8bf5e0deaacc392abbbdc89cea199a4e3ae120a1042ef0" exitCode=0 Jan 25 00:27:08 crc kubenswrapper[4947]: I0125 00:27:08.654544 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"3b32a95f-4c19-445e-b87b-cbe8bc5f201a","Type":"ContainerDied","Data":"4ec22a781a4580f96f8bf5e0deaacc392abbbdc89cea199a4e3ae120a1042ef0"} Jan 25 00:27:08 crc kubenswrapper[4947]: I0125 00:27:08.654915 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"3b32a95f-4c19-445e-b87b-cbe8bc5f201a","Type":"ContainerStarted","Data":"1a28b2860ddc19f895a721896df139ceca9edbb20b78b9c62eb78485a1f6d3b6"} Jan 25 00:27:09 crc kubenswrapper[4947]: I0125 00:27:09.673441 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"3b32a95f-4c19-445e-b87b-cbe8bc5f201a","Type":"ContainerStarted","Data":"269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8"} Jan 25 00:27:09 crc kubenswrapper[4947]: I0125 00:27:09.701969 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=3.701948 podStartE2EDuration="3.701948s" podCreationTimestamp="2026-01-25 00:27:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:27:09.696570791 +0000 UTC m=+1068.929561241" watchObservedRunningTime="2026-01-25 00:27:09.701948 +0000 UTC m=+1068.934938460" Jan 25 00:27:17 crc 
kubenswrapper[4947]: I0125 00:27:17.073657 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:27:17 crc kubenswrapper[4947]: I0125 00:27:17.074766 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:27:17 crc kubenswrapper[4947]: I0125 00:27:17.567703 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 25 00:27:17 crc kubenswrapper[4947]: I0125 00:27:17.568040 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="3b32a95f-4c19-445e-b87b-cbe8bc5f201a" containerName="docker-build" containerID="cri-o://269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8" gracePeriod=30 Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.577470 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_3b32a95f-4c19-445e-b87b-cbe8bc5f201a/docker-build/0.log" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.578846 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708266 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-ca-bundles\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708385 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbnkq\" (UniqueName: \"kubernetes.io/projected/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-kube-api-access-sbnkq\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708485 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-root\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708574 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-node-pullsecrets\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708616 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-pull\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708662 4947 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-system-configs\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708681 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708710 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-proxy-ca-bundles\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708831 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-blob-cache\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708885 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildworkdir\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708932 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: 
\"kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-push\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.708998 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-run\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.709040 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildcachedir\") pod \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\" (UID: \"3b32a95f-4c19-445e-b87b-cbe8bc5f201a\") " Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.709649 4947 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.709643 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.709694 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). 
InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.709821 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.710599 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.710867 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.711352 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.716236 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-pull" (OuterVolumeSpecName: "builder-dockercfg-jd52j-pull") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). InnerVolumeSpecName "builder-dockercfg-jd52j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.725506 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-kube-api-access-sbnkq" (OuterVolumeSpecName: "kube-api-access-sbnkq") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). InnerVolumeSpecName "kube-api-access-sbnkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.725523 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-push" (OuterVolumeSpecName: "builder-dockercfg-jd52j-push") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). InnerVolumeSpecName "builder-dockercfg-jd52j-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.740543 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_3b32a95f-4c19-445e-b87b-cbe8bc5f201a/docker-build/0.log" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.741273 4947 generic.go:334] "Generic (PLEG): container finished" podID="3b32a95f-4c19-445e-b87b-cbe8bc5f201a" containerID="269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8" exitCode=1 Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.741419 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"3b32a95f-4c19-445e-b87b-cbe8bc5f201a","Type":"ContainerDied","Data":"269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8"} Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.741592 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"3b32a95f-4c19-445e-b87b-cbe8bc5f201a","Type":"ContainerDied","Data":"1a28b2860ddc19f895a721896df139ceca9edbb20b78b9c62eb78485a1f6d3b6"} Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.741716 4947 scope.go:117] "RemoveContainer" containerID="269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.741980 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.811308 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.811650 4947 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.811661 4947 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.811669 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbnkq\" (UniqueName: \"kubernetes.io/projected/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-kube-api-access-sbnkq\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.811680 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-pull\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.811689 4947 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.811699 4947 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-proxy-ca-bundles\") on node 
\"crc\" DevicePath \"\"" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.811709 4947 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.811721 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-builder-dockercfg-jd52j-push\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.817720 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.841731 4947 scope.go:117] "RemoveContainer" containerID="4ec22a781a4580f96f8bf5e0deaacc392abbbdc89cea199a4e3ae120a1042ef0" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.854084 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "3b32a95f-4c19-445e-b87b-cbe8bc5f201a" (UID: "3b32a95f-4c19-445e-b87b-cbe8bc5f201a"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.878801 4947 scope.go:117] "RemoveContainer" containerID="269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8" Jan 25 00:27:18 crc kubenswrapper[4947]: E0125 00:27:18.879275 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8\": container with ID starting with 269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8 not found: ID does not exist" containerID="269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.879313 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8"} err="failed to get container status \"269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8\": rpc error: code = NotFound desc = could not find container \"269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8\": container with ID starting with 269062e4b582f0e1ad876b58f51d6afeb53d9d76655524c9b108801c628524e8 not found: ID does not exist" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.879339 4947 scope.go:117] "RemoveContainer" containerID="4ec22a781a4580f96f8bf5e0deaacc392abbbdc89cea199a4e3ae120a1042ef0" Jan 25 00:27:18 crc kubenswrapper[4947]: E0125 00:27:18.879791 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ec22a781a4580f96f8bf5e0deaacc392abbbdc89cea199a4e3ae120a1042ef0\": container with ID starting with 4ec22a781a4580f96f8bf5e0deaacc392abbbdc89cea199a4e3ae120a1042ef0 not found: ID does not exist" containerID="4ec22a781a4580f96f8bf5e0deaacc392abbbdc89cea199a4e3ae120a1042ef0" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.879822 
4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ec22a781a4580f96f8bf5e0deaacc392abbbdc89cea199a4e3ae120a1042ef0"} err="failed to get container status \"4ec22a781a4580f96f8bf5e0deaacc392abbbdc89cea199a4e3ae120a1042ef0\": rpc error: code = NotFound desc = could not find container \"4ec22a781a4580f96f8bf5e0deaacc392abbbdc89cea199a4e3ae120a1042ef0\": container with ID starting with 4ec22a781a4580f96f8bf5e0deaacc392abbbdc89cea199a4e3ae120a1042ef0 not found: ID does not exist" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.913688 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:18 crc kubenswrapper[4947]: I0125 00:27:18.913753 4947 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3b32a95f-4c19-445e-b87b-cbe8bc5f201a-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.077084 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.083968 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.100067 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b32a95f-4c19-445e-b87b-cbe8bc5f201a" path="/var/lib/kubelet/pods/3b32a95f-4c19-445e-b87b-cbe8bc5f201a/volumes" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.181245 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 25 00:27:19 crc kubenswrapper[4947]: E0125 00:27:19.181586 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b32a95f-4c19-445e-b87b-cbe8bc5f201a" 
containerName="manage-dockerfile" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.181611 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b32a95f-4c19-445e-b87b-cbe8bc5f201a" containerName="manage-dockerfile" Jan 25 00:27:19 crc kubenswrapper[4947]: E0125 00:27:19.181636 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b32a95f-4c19-445e-b87b-cbe8bc5f201a" containerName="docker-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.181646 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b32a95f-4c19-445e-b87b-cbe8bc5f201a" containerName="docker-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.181785 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b32a95f-4c19-445e-b87b-cbe8bc5f201a" containerName="docker-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.182880 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.185369 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-ca" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.185765 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jd52j" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.186084 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-global-ca" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.186335 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-sys-config" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.213475 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320410 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-push\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320468 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-buildworkdir\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320501 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320526 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-root\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320552 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-pull\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320572 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-run\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320603 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320619 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z46q6\" (UniqueName: \"kubernetes.io/projected/77af6791-c204-477c-b362-ce322fd18448-kube-api-access-z46q6\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320649 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-buildcachedir\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320675 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-system-configs\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320698 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.320716 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426105 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-buildworkdir\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426204 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426276 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-root\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426334 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-pull\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426370 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-run\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426444 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426503 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z46q6\" (UniqueName: \"kubernetes.io/projected/77af6791-c204-477c-b362-ce322fd18448-kube-api-access-z46q6\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426577 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-buildcachedir\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426650 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-system-configs\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426684 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426745 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426853 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-push\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.426912 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-buildworkdir\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.427090 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-node-pullsecrets\") pod \"sg-core-2-build\" 
(UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.427168 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-buildcachedir\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.427299 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-run\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.427328 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-root\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.427379 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-system-configs\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.427575 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 
00:27:19.428090 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.431418 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.438641 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-pull\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.438732 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-push\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.455738 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z46q6\" (UniqueName: \"kubernetes.io/projected/77af6791-c204-477c-b362-ce322fd18448-kube-api-access-z46q6\") pod \"sg-core-2-build\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.500068 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 25 00:27:19 crc kubenswrapper[4947]: I0125 00:27:19.937821 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 25 00:27:19 crc kubenswrapper[4947]: W0125 00:27:19.941327 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod77af6791_c204_477c_b362_ce322fd18448.slice/crio-0759e43021908768d14063019ea0f5cf22f849b7ea0a3f13a5fc0eeba338e9be WatchSource:0}: Error finding container 0759e43021908768d14063019ea0f5cf22f849b7ea0a3f13a5fc0eeba338e9be: Status 404 returned error can't find the container with id 0759e43021908768d14063019ea0f5cf22f849b7ea0a3f13a5fc0eeba338e9be Jan 25 00:27:20 crc kubenswrapper[4947]: I0125 00:27:20.765833 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"77af6791-c204-477c-b362-ce322fd18448","Type":"ContainerStarted","Data":"53cf344b52518198c3aa0c5dfedb17a9bae990fd26020258415ae4dd5eadf484"} Jan 25 00:27:20 crc kubenswrapper[4947]: I0125 00:27:20.766375 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"77af6791-c204-477c-b362-ce322fd18448","Type":"ContainerStarted","Data":"0759e43021908768d14063019ea0f5cf22f849b7ea0a3f13a5fc0eeba338e9be"} Jan 25 00:27:21 crc kubenswrapper[4947]: I0125 00:27:21.773433 4947 generic.go:334] "Generic (PLEG): container finished" podID="77af6791-c204-477c-b362-ce322fd18448" containerID="53cf344b52518198c3aa0c5dfedb17a9bae990fd26020258415ae4dd5eadf484" exitCode=0 Jan 25 00:27:21 crc kubenswrapper[4947]: I0125 00:27:21.773545 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"77af6791-c204-477c-b362-ce322fd18448","Type":"ContainerDied","Data":"53cf344b52518198c3aa0c5dfedb17a9bae990fd26020258415ae4dd5eadf484"} Jan 25 00:27:22 crc kubenswrapper[4947]: I0125 
00:27:22.783296 4947 generic.go:334] "Generic (PLEG): container finished" podID="77af6791-c204-477c-b362-ce322fd18448" containerID="ad26ed26fc6645ed7ca740ab7344045c7f2bab95a3573f26741ffbd26f39128b" exitCode=0 Jan 25 00:27:22 crc kubenswrapper[4947]: I0125 00:27:22.783362 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"77af6791-c204-477c-b362-ce322fd18448","Type":"ContainerDied","Data":"ad26ed26fc6645ed7ca740ab7344045c7f2bab95a3573f26741ffbd26f39128b"} Jan 25 00:27:22 crc kubenswrapper[4947]: I0125 00:27:22.834993 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_77af6791-c204-477c-b362-ce322fd18448/manage-dockerfile/0.log" Jan 25 00:27:23 crc kubenswrapper[4947]: I0125 00:27:23.797491 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"77af6791-c204-477c-b362-ce322fd18448","Type":"ContainerStarted","Data":"926b6653f0edfbf0c39f622f9fe1adc864ba4c8a2544121528ab9b4b3a607201"} Jan 25 00:27:23 crc kubenswrapper[4947]: I0125 00:27:23.824284 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=4.824258437 podStartE2EDuration="4.824258437s" podCreationTimestamp="2026-01-25 00:27:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:27:23.821540392 +0000 UTC m=+1083.054530842" watchObservedRunningTime="2026-01-25 00:27:23.824258437 +0000 UTC m=+1083.057248897" Jan 25 00:27:47 crc kubenswrapper[4947]: I0125 00:27:47.072375 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:27:47 crc 
kubenswrapper[4947]: I0125 00:27:47.072917 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:28:17 crc kubenswrapper[4947]: I0125 00:28:17.072281 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:28:17 crc kubenswrapper[4947]: I0125 00:28:17.072808 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:28:17 crc kubenswrapper[4947]: I0125 00:28:17.072872 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:28:17 crc kubenswrapper[4947]: I0125 00:28:17.073516 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"16e9b3158d1aebf618ce87b5aa5158940b299b1ba44d29893c13b023abe239b1"} pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 00:28:17 crc kubenswrapper[4947]: I0125 00:28:17.073581 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" 
podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" containerID="cri-o://16e9b3158d1aebf618ce87b5aa5158940b299b1ba44d29893c13b023abe239b1" gracePeriod=600 Jan 25 00:28:25 crc kubenswrapper[4947]: I0125 00:28:25.428455 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-mdgrh_5f67ec28-baae-409e-a42d-03a486e7a26b/machine-config-daemon/5.log" Jan 25 00:28:25 crc kubenswrapper[4947]: I0125 00:28:25.433100 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerID="16e9b3158d1aebf618ce87b5aa5158940b299b1ba44d29893c13b023abe239b1" exitCode=-1 Jan 25 00:28:25 crc kubenswrapper[4947]: I0125 00:28:25.433169 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerDied","Data":"16e9b3158d1aebf618ce87b5aa5158940b299b1ba44d29893c13b023abe239b1"} Jan 25 00:28:25 crc kubenswrapper[4947]: I0125 00:28:25.433205 4947 scope.go:117] "RemoveContainer" containerID="3d54be0f7cb7c92bd5d7a293f8c1b17de01e73a4d2768de922b6d4f49ead1a39" Jan 25 00:28:26 crc kubenswrapper[4947]: I0125 00:28:26.442036 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"e5fe371d16c66e0e03d5287fecc388949acddf08b1c19945fd233890f245ebb7"} Jan 25 00:29:28 crc kubenswrapper[4947]: I0125 00:29:28.502075 4947 scope.go:117] "RemoveContainer" containerID="29e25cd653e5646a037201713219653ddf735f00af3b32cacab158f9791bc90a" Jan 25 00:29:28 crc kubenswrapper[4947]: I0125 00:29:28.551725 4947 scope.go:117] "RemoveContainer" containerID="42d0401113a9cc459f1654c1224909ba635689103f9bf3566db1b01252300549" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.141207 4947 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px"] Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.142394 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.145351 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.151008 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.161815 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px"] Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.294834 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee769c6a-9981-4818-8ae5-842f6937caec-config-volume\") pod \"collect-profiles-29488350-fm7px\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.294892 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svvn4\" (UniqueName: \"kubernetes.io/projected/ee769c6a-9981-4818-8ae5-842f6937caec-kube-api-access-svvn4\") pod \"collect-profiles-29488350-fm7px\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.294930 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/ee769c6a-9981-4818-8ae5-842f6937caec-secret-volume\") pod \"collect-profiles-29488350-fm7px\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.395967 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee769c6a-9981-4818-8ae5-842f6937caec-config-volume\") pod \"collect-profiles-29488350-fm7px\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.396022 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svvn4\" (UniqueName: \"kubernetes.io/projected/ee769c6a-9981-4818-8ae5-842f6937caec-kube-api-access-svvn4\") pod \"collect-profiles-29488350-fm7px\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.396056 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee769c6a-9981-4818-8ae5-842f6937caec-secret-volume\") pod \"collect-profiles-29488350-fm7px\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.396750 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee769c6a-9981-4818-8ae5-842f6937caec-config-volume\") pod \"collect-profiles-29488350-fm7px\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 
00:30:00.402381 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee769c6a-9981-4818-8ae5-842f6937caec-secret-volume\") pod \"collect-profiles-29488350-fm7px\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.427895 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svvn4\" (UniqueName: \"kubernetes.io/projected/ee769c6a-9981-4818-8ae5-842f6937caec-kube-api-access-svvn4\") pod \"collect-profiles-29488350-fm7px\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.460956 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:00 crc kubenswrapper[4947]: I0125 00:30:00.929470 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px"] Jan 25 00:30:01 crc kubenswrapper[4947]: I0125 00:30:01.417321 4947 generic.go:334] "Generic (PLEG): container finished" podID="ee769c6a-9981-4818-8ae5-842f6937caec" containerID="93fcc613c9ba4599c5f6015e7f766475a34b2493629af138060b766f5b39aba0" exitCode=0 Jan 25 00:30:01 crc kubenswrapper[4947]: I0125 00:30:01.417432 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" event={"ID":"ee769c6a-9981-4818-8ae5-842f6937caec","Type":"ContainerDied","Data":"93fcc613c9ba4599c5f6015e7f766475a34b2493629af138060b766f5b39aba0"} Jan 25 00:30:01 crc kubenswrapper[4947]: I0125 00:30:01.417641 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" 
event={"ID":"ee769c6a-9981-4818-8ae5-842f6937caec","Type":"ContainerStarted","Data":"f851f5b340aac89ef44e8051e40a845b0c9d4c36d60825cd7f7839fec0cbd98f"} Jan 25 00:30:02 crc kubenswrapper[4947]: I0125 00:30:02.671365 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:02 crc kubenswrapper[4947]: I0125 00:30:02.830010 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee769c6a-9981-4818-8ae5-842f6937caec-secret-volume\") pod \"ee769c6a-9981-4818-8ae5-842f6937caec\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " Jan 25 00:30:02 crc kubenswrapper[4947]: I0125 00:30:02.830090 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svvn4\" (UniqueName: \"kubernetes.io/projected/ee769c6a-9981-4818-8ae5-842f6937caec-kube-api-access-svvn4\") pod \"ee769c6a-9981-4818-8ae5-842f6937caec\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " Jan 25 00:30:02 crc kubenswrapper[4947]: I0125 00:30:02.830170 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee769c6a-9981-4818-8ae5-842f6937caec-config-volume\") pod \"ee769c6a-9981-4818-8ae5-842f6937caec\" (UID: \"ee769c6a-9981-4818-8ae5-842f6937caec\") " Jan 25 00:30:02 crc kubenswrapper[4947]: I0125 00:30:02.831074 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee769c6a-9981-4818-8ae5-842f6937caec-config-volume" (OuterVolumeSpecName: "config-volume") pod "ee769c6a-9981-4818-8ae5-842f6937caec" (UID: "ee769c6a-9981-4818-8ae5-842f6937caec"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:30:02 crc kubenswrapper[4947]: I0125 00:30:02.835576 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee769c6a-9981-4818-8ae5-842f6937caec-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ee769c6a-9981-4818-8ae5-842f6937caec" (UID: "ee769c6a-9981-4818-8ae5-842f6937caec"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:30:02 crc kubenswrapper[4947]: I0125 00:30:02.835612 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee769c6a-9981-4818-8ae5-842f6937caec-kube-api-access-svvn4" (OuterVolumeSpecName: "kube-api-access-svvn4") pod "ee769c6a-9981-4818-8ae5-842f6937caec" (UID: "ee769c6a-9981-4818-8ae5-842f6937caec"). InnerVolumeSpecName "kube-api-access-svvn4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:30:02 crc kubenswrapper[4947]: I0125 00:30:02.932054 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-svvn4\" (UniqueName: \"kubernetes.io/projected/ee769c6a-9981-4818-8ae5-842f6937caec-kube-api-access-svvn4\") on node \"crc\" DevicePath \"\"" Jan 25 00:30:02 crc kubenswrapper[4947]: I0125 00:30:02.932181 4947 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ee769c6a-9981-4818-8ae5-842f6937caec-config-volume\") on node \"crc\" DevicePath \"\"" Jan 25 00:30:02 crc kubenswrapper[4947]: I0125 00:30:02.932203 4947 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ee769c6a-9981-4818-8ae5-842f6937caec-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 25 00:30:03 crc kubenswrapper[4947]: I0125 00:30:03.430913 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" 
event={"ID":"ee769c6a-9981-4818-8ae5-842f6937caec","Type":"ContainerDied","Data":"f851f5b340aac89ef44e8051e40a845b0c9d4c36d60825cd7f7839fec0cbd98f"} Jan 25 00:30:03 crc kubenswrapper[4947]: I0125 00:30:03.430962 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f851f5b340aac89ef44e8051e40a845b0c9d4c36d60825cd7f7839fec0cbd98f" Jan 25 00:30:03 crc kubenswrapper[4947]: I0125 00:30:03.431009 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29488350-fm7px" Jan 25 00:30:47 crc kubenswrapper[4947]: I0125 00:30:47.073338 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:30:47 crc kubenswrapper[4947]: I0125 00:30:47.074263 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:31:16 crc kubenswrapper[4947]: I0125 00:31:16.967892 4947 generic.go:334] "Generic (PLEG): container finished" podID="77af6791-c204-477c-b362-ce322fd18448" containerID="926b6653f0edfbf0c39f622f9fe1adc864ba4c8a2544121528ab9b4b3a607201" exitCode=0 Jan 25 00:31:16 crc kubenswrapper[4947]: I0125 00:31:16.968013 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"77af6791-c204-477c-b362-ce322fd18448","Type":"ContainerDied","Data":"926b6653f0edfbf0c39f622f9fe1adc864ba4c8a2544121528ab9b4b3a607201"} Jan 25 00:31:17 crc kubenswrapper[4947]: I0125 00:31:17.073389 4947 patch_prober.go:28] interesting 
pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:31:17 crc kubenswrapper[4947]: I0125 00:31:17.074265 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.273865 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.314646 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-root\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.314721 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-proxy-ca-bundles\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.314746 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-pull\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 
00:31:18.314779 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-build-blob-cache\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.314829 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-ca-bundles\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.314851 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z46q6\" (UniqueName: \"kubernetes.io/projected/77af6791-c204-477c-b362-ce322fd18448-kube-api-access-z46q6\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.314915 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-buildcachedir\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.314949 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-push\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.315001 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-system-configs\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.315030 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-run\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.315061 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-buildworkdir\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.315089 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-node-pullsecrets\") pod \"77af6791-c204-477c-b362-ce322fd18448\" (UID: \"77af6791-c204-477c-b362-ce322fd18448\") " Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.315398 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.315539 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.316756 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.316872 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.319780 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.327771 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-pull" (OuterVolumeSpecName: "builder-dockercfg-jd52j-pull") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "builder-dockercfg-jd52j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.331488 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77af6791-c204-477c-b362-ce322fd18448-kube-api-access-z46q6" (OuterVolumeSpecName: "kube-api-access-z46q6") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "kube-api-access-z46q6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.332542 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.333182 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-push" (OuterVolumeSpecName: "builder-dockercfg-jd52j-push") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "builder-dockercfg-jd52j-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.339819 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.417100 4947 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.417210 4947 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.417235 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-pull\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.417252 4947 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.417269 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z46q6\" (UniqueName: \"kubernetes.io/projected/77af6791-c204-477c-b362-ce322fd18448-kube-api-access-z46q6\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.417281 4947 reconciler_common.go:293] "Volume 
detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/77af6791-c204-477c-b362-ce322fd18448-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.417293 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/77af6791-c204-477c-b362-ce322fd18448-builder-dockercfg-jd52j-push\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.417305 4947 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/77af6791-c204-477c-b362-ce322fd18448-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.417317 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.417329 4947 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.694007 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.721783 4947 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.990082 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"77af6791-c204-477c-b362-ce322fd18448","Type":"ContainerDied","Data":"0759e43021908768d14063019ea0f5cf22f849b7ea0a3f13a5fc0eeba338e9be"} Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.990182 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0759e43021908768d14063019ea0f5cf22f849b7ea0a3f13a5fc0eeba338e9be" Jan 25 00:31:18 crc kubenswrapper[4947]: I0125 00:31:18.990316 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 25 00:31:21 crc kubenswrapper[4947]: I0125 00:31:21.146420 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "77af6791-c204-477c-b362-ce322fd18448" (UID: "77af6791-c204-477c-b362-ce322fd18448"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:31:21 crc kubenswrapper[4947]: I0125 00:31:21.150930 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/77af6791-c204-477c-b362-ce322fd18448-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.088045 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 25 00:31:23 crc kubenswrapper[4947]: E0125 00:31:23.088324 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77af6791-c204-477c-b362-ce322fd18448" containerName="docker-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.088339 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="77af6791-c204-477c-b362-ce322fd18448" containerName="docker-build" Jan 25 00:31:23 crc kubenswrapper[4947]: E0125 00:31:23.088350 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee769c6a-9981-4818-8ae5-842f6937caec" containerName="collect-profiles" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.088358 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee769c6a-9981-4818-8ae5-842f6937caec" containerName="collect-profiles" Jan 25 00:31:23 crc kubenswrapper[4947]: E0125 00:31:23.088370 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77af6791-c204-477c-b362-ce322fd18448" containerName="git-clone" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.088378 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="77af6791-c204-477c-b362-ce322fd18448" containerName="git-clone" Jan 25 00:31:23 crc kubenswrapper[4947]: E0125 00:31:23.088392 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77af6791-c204-477c-b362-ce322fd18448" containerName="manage-dockerfile" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.088400 4947 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="77af6791-c204-477c-b362-ce322fd18448" containerName="manage-dockerfile" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.088546 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee769c6a-9981-4818-8ae5-842f6937caec" containerName="collect-profiles" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.088559 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="77af6791-c204-477c-b362-ce322fd18448" containerName="docker-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.089324 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.102765 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-global-ca" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.102937 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-ca" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.103071 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-sys-config" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.103120 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jd52j" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.113919 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.179166 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.179224 
4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.179266 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.179293 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.179413 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.179548 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-pull\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc 
kubenswrapper[4947]: I0125 00:31:23.179626 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.179682 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.179727 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl4xl\" (UniqueName: \"kubernetes.io/projected/b66303d6-9f4a-401f-8dc6-5855a51b28cb-kube-api-access-sl4xl\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.179790 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-push\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.179947 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 
00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.180000 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.280591 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-pull\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.280635 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.280661 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.280681 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl4xl\" (UniqueName: \"kubernetes.io/projected/b66303d6-9f4a-401f-8dc6-5855a51b28cb-kube-api-access-sl4xl\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 
00:31:23.280697 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-push\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281055 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281090 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281450 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281179 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281391 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281210 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281645 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281659 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281469 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281722 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281744 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281777 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281795 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.281841 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.282246 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-proxy-ca-bundles\") pod 
\"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.282258 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.289656 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-push\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.290051 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-pull\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.305379 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl4xl\" (UniqueName: \"kubernetes.io/projected/b66303d6-9f4a-401f-8dc6-5855a51b28cb-kube-api-access-sl4xl\") pod \"sg-bridge-1-build\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.421030 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:23 crc kubenswrapper[4947]: I0125 00:31:23.649211 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 25 00:31:24 crc kubenswrapper[4947]: I0125 00:31:24.021709 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"b66303d6-9f4a-401f-8dc6-5855a51b28cb","Type":"ContainerStarted","Data":"3b9968f7292e05181c1395dde57498d955080e1dcb49d9484e57992f06fed9b1"} Jan 25 00:31:24 crc kubenswrapper[4947]: I0125 00:31:24.021763 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"b66303d6-9f4a-401f-8dc6-5855a51b28cb","Type":"ContainerStarted","Data":"a586952b97453573c2d5734b96500c0e485fae2fd231d75cb06ecf54a70cb8b6"} Jan 25 00:31:25 crc kubenswrapper[4947]: I0125 00:31:25.033567 4947 generic.go:334] "Generic (PLEG): container finished" podID="b66303d6-9f4a-401f-8dc6-5855a51b28cb" containerID="3b9968f7292e05181c1395dde57498d955080e1dcb49d9484e57992f06fed9b1" exitCode=0 Jan 25 00:31:25 crc kubenswrapper[4947]: I0125 00:31:25.033633 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"b66303d6-9f4a-401f-8dc6-5855a51b28cb","Type":"ContainerDied","Data":"3b9968f7292e05181c1395dde57498d955080e1dcb49d9484e57992f06fed9b1"} Jan 25 00:31:26 crc kubenswrapper[4947]: I0125 00:31:26.042770 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"b66303d6-9f4a-401f-8dc6-5855a51b28cb","Type":"ContainerStarted","Data":"8bd3c19ebf87dfa9492f0b6c526fd93072c03b5bf0b404a53c130aa373ce7a49"} Jan 25 00:31:26 crc kubenswrapper[4947]: I0125 00:31:26.071764 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=3.07174281 podStartE2EDuration="3.07174281s" 
podCreationTimestamp="2026-01-25 00:31:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:31:26.071660378 +0000 UTC m=+1325.304650838" watchObservedRunningTime="2026-01-25 00:31:26.07174281 +0000 UTC m=+1325.304733260" Jan 25 00:31:28 crc kubenswrapper[4947]: I0125 00:31:28.644612 4947 scope.go:117] "RemoveContainer" containerID="8859bd2c23cc381a47cfe2e04937484855d70dd800833fd36744559428ce5555" Jan 25 00:31:28 crc kubenswrapper[4947]: I0125 00:31:28.683870 4947 scope.go:117] "RemoveContainer" containerID="028bccab312ad8a58a8f5190d26bb368c53b9a76f922c33081e772b4e9d903bf" Jan 25 00:31:33 crc kubenswrapper[4947]: I0125 00:31:33.810246 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 25 00:31:33 crc kubenswrapper[4947]: I0125 00:31:33.812863 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-bridge-1-build" podUID="b66303d6-9f4a-401f-8dc6-5855a51b28cb" containerName="docker-build" containerID="cri-o://8bd3c19ebf87dfa9492f0b6c526fd93072c03b5bf0b404a53c130aa373ce7a49" gracePeriod=30 Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.092605 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_b66303d6-9f4a-401f-8dc6-5855a51b28cb/docker-build/0.log" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.093330 4947 generic.go:334] "Generic (PLEG): container finished" podID="b66303d6-9f4a-401f-8dc6-5855a51b28cb" containerID="8bd3c19ebf87dfa9492f0b6c526fd93072c03b5bf0b404a53c130aa373ce7a49" exitCode=1 Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.093428 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"b66303d6-9f4a-401f-8dc6-5855a51b28cb","Type":"ContainerDied","Data":"8bd3c19ebf87dfa9492f0b6c526fd93072c03b5bf0b404a53c130aa373ce7a49"} Jan 25 00:31:34 
crc kubenswrapper[4947]: I0125 00:31:34.093480 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"b66303d6-9f4a-401f-8dc6-5855a51b28cb","Type":"ContainerDied","Data":"a586952b97453573c2d5734b96500c0e485fae2fd231d75cb06ecf54a70cb8b6"} Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.093500 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a586952b97453573c2d5734b96500c0e485fae2fd231d75cb06ecf54a70cb8b6" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.137397 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_b66303d6-9f4a-401f-8dc6-5855a51b28cb/docker-build/0.log" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.138219 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255118 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl4xl\" (UniqueName: \"kubernetes.io/projected/b66303d6-9f4a-401f-8dc6-5855a51b28cb-kube-api-access-sl4xl\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255246 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-run\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255286 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-proxy-ca-bundles\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: 
\"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255341 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildworkdir\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255373 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-push\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255410 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-node-pullsecrets\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255431 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-system-configs\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255492 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildcachedir\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255522 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-ca-bundles\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255556 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-pull\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255582 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-blob-cache\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.255630 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-root\") pod \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\" (UID: \"b66303d6-9f4a-401f-8dc6-5855a51b28cb\") " Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.256163 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.256238 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.256430 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.256671 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.256905 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.256947 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.257065 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.263556 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b66303d6-9f4a-401f-8dc6-5855a51b28cb-kube-api-access-sl4xl" (OuterVolumeSpecName: "kube-api-access-sl4xl") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "kube-api-access-sl4xl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.266554 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-push" (OuterVolumeSpecName: "builder-dockercfg-jd52j-push") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "builder-dockercfg-jd52j-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.267180 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-pull" (OuterVolumeSpecName: "builder-dockercfg-jd52j-pull") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "builder-dockercfg-jd52j-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.340952 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.356782 4947 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.356816 4947 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.356826 4947 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.356834 4947 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.356843 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-pull\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.356852 4947 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.356860 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl4xl\" (UniqueName: \"kubernetes.io/projected/b66303d6-9f4a-401f-8dc6-5855a51b28cb-kube-api-access-sl4xl\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.356868 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.356876 4947 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b66303d6-9f4a-401f-8dc6-5855a51b28cb-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.356887 4947 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.356894 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-push\" (UniqueName: 
\"kubernetes.io/secret/b66303d6-9f4a-401f-8dc6-5855a51b28cb-builder-dockercfg-jd52j-push\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.728122 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "b66303d6-9f4a-401f-8dc6-5855a51b28cb" (UID: "b66303d6-9f4a-401f-8dc6-5855a51b28cb"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:31:34 crc kubenswrapper[4947]: I0125 00:31:34.762853 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b66303d6-9f4a-401f-8dc6-5855a51b28cb-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.097908 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.142198 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.147163 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.472755 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 25 00:31:35 crc kubenswrapper[4947]: E0125 00:31:35.473336 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b66303d6-9f4a-401f-8dc6-5855a51b28cb" containerName="docker-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.473439 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b66303d6-9f4a-401f-8dc6-5855a51b28cb" containerName="docker-build" Jan 25 00:31:35 crc kubenswrapper[4947]: E0125 00:31:35.473528 
4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b66303d6-9f4a-401f-8dc6-5855a51b28cb" containerName="manage-dockerfile" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.473595 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b66303d6-9f4a-401f-8dc6-5855a51b28cb" containerName="manage-dockerfile" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.473777 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="b66303d6-9f4a-401f-8dc6-5855a51b28cb" containerName="docker-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.474870 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.476348 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-global-ca" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.476504 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jd52j" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.477049 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-sys-config" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.485290 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-ca" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.509298 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.571102 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc 
kubenswrapper[4947]: I0125 00:31:35.571415 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-push\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.571528 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.571673 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.571793 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-pull\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.571837 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5rgf\" (UniqueName: \"kubernetes.io/projected/5e48b97d-b6da-46e1-805a-1573652be38c-kube-api-access-r5rgf\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " 
pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.571887 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.571916 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.571945 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.571968 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.572024 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: 
\"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.572048 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.672472 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.672789 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-push\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.672910 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.673005 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build" 
Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.672959 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build"
Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.673018 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build"
Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.673267 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-pull\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build"
Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.673380 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5rgf\" (UniqueName: \"kubernetes.io/projected/5e48b97d-b6da-46e1-805a-1573652be38c-kube-api-access-r5rgf\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build"
Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.673476 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build"
Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.673571 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build"
Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.673663 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build"
Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.673743 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build"
Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.673874 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build"
Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.673992 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build"
Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.674087 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build"
Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.674199 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build"
Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.674368 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build"
Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.674169 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build"
Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.674716 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build"
Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.674765 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build"
Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.675907 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build"
Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.678631 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-push\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build"
Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.679534 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-pull\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build"
Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.692601 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5rgf\" (UniqueName: \"kubernetes.io/projected/5e48b97d-b6da-46e1-805a-1573652be38c-kube-api-access-r5rgf\") pod \"sg-bridge-2-build\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") " pod="service-telemetry/sg-bridge-2-build"
Jan 25 00:31:35 crc kubenswrapper[4947]: I0125 00:31:35.790177 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build"
Jan 25 00:31:36 crc kubenswrapper[4947]: I0125 00:31:36.310648 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"]
Jan 25 00:31:37 crc kubenswrapper[4947]: I0125 00:31:37.097776 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b66303d6-9f4a-401f-8dc6-5855a51b28cb" path="/var/lib/kubelet/pods/b66303d6-9f4a-401f-8dc6-5855a51b28cb/volumes"
Jan 25 00:31:37 crc kubenswrapper[4947]: I0125 00:31:37.109815 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5e48b97d-b6da-46e1-805a-1573652be38c","Type":"ContainerStarted","Data":"a942688021affba03664d49fe678826c656d3e63d2d4c5a24dd21f78de3cfbb0"}
Jan 25 00:31:37 crc kubenswrapper[4947]: I0125 00:31:37.109865 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5e48b97d-b6da-46e1-805a-1573652be38c","Type":"ContainerStarted","Data":"41f32d7801129b67e6b416864000e59d2e61a76199bac60f5ea8a250a0e45fd8"}
Jan 25 00:31:38 crc kubenswrapper[4947]: I0125 00:31:38.119766 4947 generic.go:334] "Generic (PLEG): container finished" podID="5e48b97d-b6da-46e1-805a-1573652be38c" containerID="a942688021affba03664d49fe678826c656d3e63d2d4c5a24dd21f78de3cfbb0" exitCode=0
Jan 25 00:31:38 crc kubenswrapper[4947]: I0125 00:31:38.119815 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5e48b97d-b6da-46e1-805a-1573652be38c","Type":"ContainerDied","Data":"a942688021affba03664d49fe678826c656d3e63d2d4c5a24dd21f78de3cfbb0"}
Jan 25 00:31:39 crc kubenswrapper[4947]: I0125 00:31:39.127397 4947 generic.go:334] "Generic (PLEG): container finished" podID="5e48b97d-b6da-46e1-805a-1573652be38c" containerID="e2b932a776e7707de5ef6e958eef570d3418be9ae6cecee7ae5b145fffaa715d" exitCode=0
Jan 25 00:31:39 crc kubenswrapper[4947]: I0125 00:31:39.127472 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5e48b97d-b6da-46e1-805a-1573652be38c","Type":"ContainerDied","Data":"e2b932a776e7707de5ef6e958eef570d3418be9ae6cecee7ae5b145fffaa715d"}
Jan 25 00:31:39 crc kubenswrapper[4947]: I0125 00:31:39.170182 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_5e48b97d-b6da-46e1-805a-1573652be38c/manage-dockerfile/0.log"
Jan 25 00:31:40 crc kubenswrapper[4947]: I0125 00:31:40.147344 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5e48b97d-b6da-46e1-805a-1573652be38c","Type":"ContainerStarted","Data":"31eed73ad1635b22a1b5134bc06b4c2144f4ba65ecd33dca5ba7c98cd031bcc9"}
Jan 25 00:31:40 crc kubenswrapper[4947]: I0125 00:31:40.361517 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=5.361487725 podStartE2EDuration="5.361487725s" podCreationTimestamp="2026-01-25 00:31:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:31:40.35307907 +0000 UTC m=+1339.586069550" watchObservedRunningTime="2026-01-25 00:31:40.361487725 +0000 UTC m=+1339.594478165"
Jan 25 00:31:47 crc kubenswrapper[4947]: I0125 00:31:47.072239 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 25 00:31:47 crc kubenswrapper[4947]: I0125 00:31:47.072818 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 25 00:31:47 crc kubenswrapper[4947]: I0125 00:31:47.072866 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh"
Jan 25 00:31:47 crc kubenswrapper[4947]: I0125 00:31:47.073495 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5fe371d16c66e0e03d5287fecc388949acddf08b1c19945fd233890f245ebb7"} pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 25 00:31:47 crc kubenswrapper[4947]: I0125 00:31:47.073559 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" containerID="cri-o://e5fe371d16c66e0e03d5287fecc388949acddf08b1c19945fd233890f245ebb7" gracePeriod=600
Jan 25 00:31:48 crc kubenswrapper[4947]: I0125 00:31:48.196693 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerID="e5fe371d16c66e0e03d5287fecc388949acddf08b1c19945fd233890f245ebb7" exitCode=0
Jan 25 00:31:48 crc kubenswrapper[4947]: I0125 00:31:48.196734 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerDied","Data":"e5fe371d16c66e0e03d5287fecc388949acddf08b1c19945fd233890f245ebb7"}
Jan 25 00:31:48 crc kubenswrapper[4947]: I0125 00:31:48.197018 4947 scope.go:117] "RemoveContainer" containerID="16e9b3158d1aebf618ce87b5aa5158940b299b1ba44d29893c13b023abe239b1"
Jan 25 00:31:49 crc kubenswrapper[4947]: I0125 00:31:49.204675 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f"}
Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.454008 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-khvm7"]
Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.456258 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-khvm7"
Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.483087 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-khvm7"]
Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.617703 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-utilities\") pod \"certified-operators-khvm7\" (UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " pod="openshift-marketplace/certified-operators-khvm7"
Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.617882 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lph5f\" (UniqueName: \"kubernetes.io/projected/af43bb2d-e763-416d-803d-16fec7332771-kube-api-access-lph5f\") pod \"certified-operators-khvm7\" (UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " pod="openshift-marketplace/certified-operators-khvm7"
Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.617965 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-catalog-content\") pod \"certified-operators-khvm7\" (UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " pod="openshift-marketplace/certified-operators-khvm7"
Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.719289 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-utilities\") pod \"certified-operators-khvm7\" (UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " pod="openshift-marketplace/certified-operators-khvm7"
Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.719376 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lph5f\" (UniqueName: \"kubernetes.io/projected/af43bb2d-e763-416d-803d-16fec7332771-kube-api-access-lph5f\") pod \"certified-operators-khvm7\" (UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " pod="openshift-marketplace/certified-operators-khvm7"
Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.719415 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-catalog-content\") pod \"certified-operators-khvm7\" (UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " pod="openshift-marketplace/certified-operators-khvm7"
Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.720015 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-catalog-content\") pod \"certified-operators-khvm7\" (UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " pod="openshift-marketplace/certified-operators-khvm7"
Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.720205 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-utilities\") pod \"certified-operators-khvm7\" (UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " pod="openshift-marketplace/certified-operators-khvm7"
Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.744071 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lph5f\" (UniqueName: \"kubernetes.io/projected/af43bb2d-e763-416d-803d-16fec7332771-kube-api-access-lph5f\") pod \"certified-operators-khvm7\" (UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " pod="openshift-marketplace/certified-operators-khvm7"
Jan 25 00:32:29 crc kubenswrapper[4947]: I0125 00:32:29.772585 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-khvm7"
Jan 25 00:32:30 crc kubenswrapper[4947]: I0125 00:32:30.227889 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-khvm7"]
Jan 25 00:32:30 crc kubenswrapper[4947]: I0125 00:32:30.535749 4947 generic.go:334] "Generic (PLEG): container finished" podID="af43bb2d-e763-416d-803d-16fec7332771" containerID="3b5d100dc6d315ff476d1d5918266454374cbb046c8dd09419aacaed19996070" exitCode=0
Jan 25 00:32:30 crc kubenswrapper[4947]: I0125 00:32:30.535813 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khvm7" event={"ID":"af43bb2d-e763-416d-803d-16fec7332771","Type":"ContainerDied","Data":"3b5d100dc6d315ff476d1d5918266454374cbb046c8dd09419aacaed19996070"}
Jan 25 00:32:30 crc kubenswrapper[4947]: I0125 00:32:30.536148 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khvm7" event={"ID":"af43bb2d-e763-416d-803d-16fec7332771","Type":"ContainerStarted","Data":"d56d5895bb34a9485e4d216be271276937d7d7a301e7d79984299b26a86ea042"}
Jan 25 00:32:30 crc kubenswrapper[4947]: I0125 00:32:30.539632 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.545705 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khvm7" event={"ID":"af43bb2d-e763-416d-803d-16fec7332771","Type":"ContainerStarted","Data":"eed5d033314da1bcbc2e52cf9e1e2bd3420778ab20933b2ef804e302ab801e05"}
Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.644750 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-b77dd"]
Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.645837 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b77dd"
Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.664237 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b77dd"]
Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.747144 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-utilities\") pod \"redhat-operators-b77dd\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") " pod="openshift-marketplace/redhat-operators-b77dd"
Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.747282 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4klrq\" (UniqueName: \"kubernetes.io/projected/70732187-2d74-4e3a-9e95-79c235f70b3b-kube-api-access-4klrq\") pod \"redhat-operators-b77dd\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") " pod="openshift-marketplace/redhat-operators-b77dd"
Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.747341 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-catalog-content\") pod \"redhat-operators-b77dd\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") " pod="openshift-marketplace/redhat-operators-b77dd"
Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.848737 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-catalog-content\") pod \"redhat-operators-b77dd\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") " pod="openshift-marketplace/redhat-operators-b77dd"
Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.848887 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-utilities\") pod \"redhat-operators-b77dd\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") " pod="openshift-marketplace/redhat-operators-b77dd"
Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.848975 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4klrq\" (UniqueName: \"kubernetes.io/projected/70732187-2d74-4e3a-9e95-79c235f70b3b-kube-api-access-4klrq\") pod \"redhat-operators-b77dd\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") " pod="openshift-marketplace/redhat-operators-b77dd"
Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.849423 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-catalog-content\") pod \"redhat-operators-b77dd\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") " pod="openshift-marketplace/redhat-operators-b77dd"
Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.849596 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-utilities\") pod \"redhat-operators-b77dd\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") " pod="openshift-marketplace/redhat-operators-b77dd"
Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.869675 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4klrq\" (UniqueName: \"kubernetes.io/projected/70732187-2d74-4e3a-9e95-79c235f70b3b-kube-api-access-4klrq\") pod \"redhat-operators-b77dd\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") " pod="openshift-marketplace/redhat-operators-b77dd"
Jan 25 00:32:31 crc kubenswrapper[4947]: I0125 00:32:31.962209 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b77dd"
Jan 25 00:32:32 crc kubenswrapper[4947]: I0125 00:32:32.174999 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-b77dd"]
Jan 25 00:32:32 crc kubenswrapper[4947]: W0125 00:32:32.186635 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70732187_2d74_4e3a_9e95_79c235f70b3b.slice/crio-a9c1e2ac91c88bb7647a2359a8af63858c17c41cf83a204cddc41485659caf41 WatchSource:0}: Error finding container a9c1e2ac91c88bb7647a2359a8af63858c17c41cf83a204cddc41485659caf41: Status 404 returned error can't find the container with id a9c1e2ac91c88bb7647a2359a8af63858c17c41cf83a204cddc41485659caf41
Jan 25 00:32:32 crc kubenswrapper[4947]: I0125 00:32:32.551868 4947 generic.go:334] "Generic (PLEG): container finished" podID="70732187-2d74-4e3a-9e95-79c235f70b3b" containerID="c6743a7e7ecc0059b0505718650e4f5040a17357b9eb78850169d09d180bc4b6" exitCode=0
Jan 25 00:32:32 crc kubenswrapper[4947]: I0125 00:32:32.551941 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b77dd" event={"ID":"70732187-2d74-4e3a-9e95-79c235f70b3b","Type":"ContainerDied","Data":"c6743a7e7ecc0059b0505718650e4f5040a17357b9eb78850169d09d180bc4b6"}
Jan 25 00:32:32 crc kubenswrapper[4947]: I0125 00:32:32.551976 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b77dd" event={"ID":"70732187-2d74-4e3a-9e95-79c235f70b3b","Type":"ContainerStarted","Data":"a9c1e2ac91c88bb7647a2359a8af63858c17c41cf83a204cddc41485659caf41"}
Jan 25 00:32:32 crc kubenswrapper[4947]: I0125 00:32:32.554002 4947 generic.go:334] "Generic (PLEG): container finished" podID="af43bb2d-e763-416d-803d-16fec7332771" containerID="eed5d033314da1bcbc2e52cf9e1e2bd3420778ab20933b2ef804e302ab801e05" exitCode=0
Jan 25 00:32:32 crc kubenswrapper[4947]: I0125 00:32:32.554092 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khvm7" event={"ID":"af43bb2d-e763-416d-803d-16fec7332771","Type":"ContainerDied","Data":"eed5d033314da1bcbc2e52cf9e1e2bd3420778ab20933b2ef804e302ab801e05"}
Jan 25 00:32:33 crc kubenswrapper[4947]: I0125 00:32:33.562112 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b77dd" event={"ID":"70732187-2d74-4e3a-9e95-79c235f70b3b","Type":"ContainerStarted","Data":"c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de"}
Jan 25 00:32:33 crc kubenswrapper[4947]: I0125 00:32:33.564824 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khvm7" event={"ID":"af43bb2d-e763-416d-803d-16fec7332771","Type":"ContainerStarted","Data":"a4bcd8438bd8d926fe64439908fcec662adf3b56502ee9d48c75bf06bbf3a249"}
Jan 25 00:32:33 crc kubenswrapper[4947]: I0125 00:32:33.592850 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-khvm7" podStartSLOduration=2.0813164730000002 podStartE2EDuration="4.592831204s" podCreationTimestamp="2026-01-25 00:32:29 +0000 UTC" firstStartedPulling="2026-01-25 00:32:30.539283109 +0000 UTC m=+1389.772273549" lastFinishedPulling="2026-01-25 00:32:33.05079785 +0000 UTC m=+1392.283788280" observedRunningTime="2026-01-25 00:32:33.592078537 +0000 UTC m=+1392.825068977" watchObservedRunningTime="2026-01-25 00:32:33.592831204 +0000 UTC m=+1392.825821644"
Jan 25 00:32:34 crc kubenswrapper[4947]: I0125 00:32:34.579040 4947 generic.go:334] "Generic (PLEG): container finished" podID="70732187-2d74-4e3a-9e95-79c235f70b3b" containerID="c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de" exitCode=0
Jan 25 00:32:34 crc kubenswrapper[4947]: I0125 00:32:34.579250 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b77dd" event={"ID":"70732187-2d74-4e3a-9e95-79c235f70b3b","Type":"ContainerDied","Data":"c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de"}
Jan 25 00:32:35 crc kubenswrapper[4947]: I0125 00:32:35.588452 4947 generic.go:334] "Generic (PLEG): container finished" podID="5e48b97d-b6da-46e1-805a-1573652be38c" containerID="31eed73ad1635b22a1b5134bc06b4c2144f4ba65ecd33dca5ba7c98cd031bcc9" exitCode=0
Jan 25 00:32:35 crc kubenswrapper[4947]: I0125 00:32:35.588606 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5e48b97d-b6da-46e1-805a-1573652be38c","Type":"ContainerDied","Data":"31eed73ad1635b22a1b5134bc06b4c2144f4ba65ecd33dca5ba7c98cd031bcc9"}
Jan 25 00:32:35 crc kubenswrapper[4947]: I0125 00:32:35.592712 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b77dd" event={"ID":"70732187-2d74-4e3a-9e95-79c235f70b3b","Type":"ContainerStarted","Data":"6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c"}
Jan 25 00:32:35 crc kubenswrapper[4947]: I0125 00:32:35.634609 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-b77dd" podStartSLOduration=2.06427527 podStartE2EDuration="4.634594221s" podCreationTimestamp="2026-01-25 00:32:31 +0000 UTC" firstStartedPulling="2026-01-25 00:32:32.553459993 +0000 UTC m=+1391.786450423" lastFinishedPulling="2026-01-25 00:32:35.123778934 +0000 UTC m=+1394.356769374" observedRunningTime="2026-01-25 00:32:35.631791884 +0000 UTC m=+1394.864782314" watchObservedRunningTime="2026-01-25 00:32:35.634594221 +0000 UTC m=+1394.867584661"
Jan 25 00:32:36 crc kubenswrapper[4947]: I0125 00:32:36.903226 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build"
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020482 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-root\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") "
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020545 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-build-blob-cache\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") "
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020577 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-buildcachedir\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") "
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020621 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-proxy-ca-bundles\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") "
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020655 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-system-configs\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") "
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020681 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-ca-bundles\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") "
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020713 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-pull\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") "
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020761 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5rgf\" (UniqueName: \"kubernetes.io/projected/5e48b97d-b6da-46e1-805a-1573652be38c-kube-api-access-r5rgf\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") "
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020785 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-buildworkdir\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") "
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020824 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-push\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") "
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020845 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-node-pullsecrets\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") "
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.020869 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-run\") pod \"5e48b97d-b6da-46e1-805a-1573652be38c\" (UID: \"5e48b97d-b6da-46e1-805a-1573652be38c\") "
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.021239 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.022002 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.022004 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.022598 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.022599 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.023152 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.024930 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.029053 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-push" (OuterVolumeSpecName: "builder-dockercfg-jd52j-push") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "builder-dockercfg-jd52j-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.029494 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-pull" (OuterVolumeSpecName: "builder-dockercfg-jd52j-pull") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "builder-dockercfg-jd52j-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.030006 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e48b97d-b6da-46e1-805a-1573652be38c-kube-api-access-r5rgf" (OuterVolumeSpecName: "kube-api-access-r5rgf") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "kube-api-access-r5rgf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.122922 4947 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.122975 4947 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.122990 4947 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.123003 4947 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5e48b97d-b6da-46e1-805a-1573652be38c-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.123014 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-pull\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.123027 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5rgf\" (UniqueName: \"kubernetes.io/projected/5e48b97d-b6da-46e1-805a-1573652be38c-kube-api-access-r5rgf\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.123042 4947 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.123053 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/5e48b97d-b6da-46e1-805a-1573652be38c-builder-dockercfg-jd52j-push\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.123064 4947 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5e48b97d-b6da-46e1-805a-1573652be38c-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.123075 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.140803 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.224691 4947 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.608236 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5e48b97d-b6da-46e1-805a-1573652be38c","Type":"ContainerDied","Data":"41f32d7801129b67e6b416864000e59d2e61a76199bac60f5ea8a250a0e45fd8"} Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.608276 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41f32d7801129b67e6b416864000e59d2e61a76199bac60f5ea8a250a0e45fd8" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.608335 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.809852 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "5e48b97d-b6da-46e1-805a-1573652be38c" (UID: "5e48b97d-b6da-46e1-805a-1573652be38c"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:32:37 crc kubenswrapper[4947]: I0125 00:32:37.834769 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5e48b97d-b6da-46e1-805a-1573652be38c-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 25 00:32:39 crc kubenswrapper[4947]: I0125 00:32:39.773225 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-khvm7" Jan 25 00:32:39 crc kubenswrapper[4947]: I0125 00:32:39.773629 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-khvm7" Jan 25 00:32:39 crc kubenswrapper[4947]: I0125 00:32:39.817931 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-khvm7" Jan 25 00:32:40 crc kubenswrapper[4947]: I0125 00:32:40.683802 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-khvm7" Jan 25 00:32:40 crc kubenswrapper[4947]: I0125 00:32:40.839056 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-khvm7"] Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.044367 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 25 00:32:41 crc kubenswrapper[4947]: E0125 00:32:41.045060 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e48b97d-b6da-46e1-805a-1573652be38c" containerName="manage-dockerfile" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.045081 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e48b97d-b6da-46e1-805a-1573652be38c" containerName="manage-dockerfile" Jan 25 00:32:41 crc kubenswrapper[4947]: E0125 00:32:41.045120 4947 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5e48b97d-b6da-46e1-805a-1573652be38c" containerName="git-clone" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.045164 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e48b97d-b6da-46e1-805a-1573652be38c" containerName="git-clone" Jan 25 00:32:41 crc kubenswrapper[4947]: E0125 00:32:41.045176 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e48b97d-b6da-46e1-805a-1573652be38c" containerName="docker-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.045184 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e48b97d-b6da-46e1-805a-1573652be38c" containerName="docker-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.045376 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e48b97d-b6da-46e1-805a-1573652be38c" containerName="docker-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.046170 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.047988 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-ca" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.048160 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jd52j" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.048653 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-sys-config" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.051615 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-global-ca" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.065844 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 25 00:32:41 crc 
kubenswrapper[4947]: I0125 00:32:41.179647 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.179725 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.179746 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97rtz\" (UniqueName: \"kubernetes.io/projected/45b0bbe8-2222-47ff-b23d-f2e571562df0-kube-api-access-97rtz\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.179780 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.179798 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: 
\"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.179888 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.179919 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.179947 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.179981 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: 
I0125 00:32:41.180001 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.180028 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.180052 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281441 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281508 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" 
(UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281539 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97rtz\" (UniqueName: \"kubernetes.io/projected/45b0bbe8-2222-47ff-b23d-f2e571562df0-kube-api-access-97rtz\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281589 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281611 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281661 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281694 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281726 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281745 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281767 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281797 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.281827 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.282418 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.282720 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.282748 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.282843 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.283033 4947 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.283179 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.283259 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.283405 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.283455 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 
00:32:41.287259 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.288156 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.305982 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97rtz\" (UniqueName: \"kubernetes.io/projected/45b0bbe8-2222-47ff-b23d-f2e571562df0-kube-api-access-97rtz\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.376412 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.904327 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.964507 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-b77dd" Jan 25 00:32:41 crc kubenswrapper[4947]: I0125 00:32:41.964675 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-b77dd" Jan 25 00:32:42 crc kubenswrapper[4947]: I0125 00:32:42.025546 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-b77dd" Jan 25 00:32:42 crc kubenswrapper[4947]: I0125 00:32:42.649500 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"45b0bbe8-2222-47ff-b23d-f2e571562df0","Type":"ContainerStarted","Data":"26747227c318ee56f01386f2aef39290d6a1c144b6fa6c040fda3c0ee559d715"} Jan 25 00:32:42 crc kubenswrapper[4947]: I0125 00:32:42.649927 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-khvm7" podUID="af43bb2d-e763-416d-803d-16fec7332771" containerName="registry-server" containerID="cri-o://a4bcd8438bd8d926fe64439908fcec662adf3b56502ee9d48c75bf06bbf3a249" gracePeriod=2 Jan 25 00:32:42 crc kubenswrapper[4947]: I0125 00:32:42.709576 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-b77dd" Jan 25 00:32:43 crc kubenswrapper[4947]: I0125 00:32:43.244682 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b77dd"] Jan 25 00:32:43 crc kubenswrapper[4947]: I0125 00:32:43.658867 4947 generic.go:334] "Generic (PLEG): container finished" 
podID="af43bb2d-e763-416d-803d-16fec7332771" containerID="a4bcd8438bd8d926fe64439908fcec662adf3b56502ee9d48c75bf06bbf3a249" exitCode=0 Jan 25 00:32:43 crc kubenswrapper[4947]: I0125 00:32:43.658952 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khvm7" event={"ID":"af43bb2d-e763-416d-803d-16fec7332771","Type":"ContainerDied","Data":"a4bcd8438bd8d926fe64439908fcec662adf3b56502ee9d48c75bf06bbf3a249"} Jan 25 00:32:43 crc kubenswrapper[4947]: I0125 00:32:43.660513 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"45b0bbe8-2222-47ff-b23d-f2e571562df0","Type":"ContainerStarted","Data":"d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5"} Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.246940 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-khvm7" Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.441983 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lph5f\" (UniqueName: \"kubernetes.io/projected/af43bb2d-e763-416d-803d-16fec7332771-kube-api-access-lph5f\") pod \"af43bb2d-e763-416d-803d-16fec7332771\" (UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.442064 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-catalog-content\") pod \"af43bb2d-e763-416d-803d-16fec7332771\" (UID: \"af43bb2d-e763-416d-803d-16fec7332771\") " Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.442092 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-utilities\") pod \"af43bb2d-e763-416d-803d-16fec7332771\" 
(UID: \"af43bb2d-e763-416d-803d-16fec7332771\") "
Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.443354 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-utilities" (OuterVolumeSpecName: "utilities") pod "af43bb2d-e763-416d-803d-16fec7332771" (UID: "af43bb2d-e763-416d-803d-16fec7332771"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.447751 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af43bb2d-e763-416d-803d-16fec7332771-kube-api-access-lph5f" (OuterVolumeSpecName: "kube-api-access-lph5f") pod "af43bb2d-e763-416d-803d-16fec7332771" (UID: "af43bb2d-e763-416d-803d-16fec7332771"). InnerVolumeSpecName "kube-api-access-lph5f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.523415 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af43bb2d-e763-416d-803d-16fec7332771" (UID: "af43bb2d-e763-416d-803d-16fec7332771"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.544163 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lph5f\" (UniqueName: \"kubernetes.io/projected/af43bb2d-e763-416d-803d-16fec7332771-kube-api-access-lph5f\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.544237 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.544294 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af43bb2d-e763-416d-803d-16fec7332771-utilities\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.671480 4947 generic.go:334] "Generic (PLEG): container finished" podID="45b0bbe8-2222-47ff-b23d-f2e571562df0" containerID="d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5" exitCode=0
Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.671541 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"45b0bbe8-2222-47ff-b23d-f2e571562df0","Type":"ContainerDied","Data":"d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5"}
Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.675405 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-b77dd" podUID="70732187-2d74-4e3a-9e95-79c235f70b3b" containerName="registry-server" containerID="cri-o://6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c" gracePeriod=2
Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.675569 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-khvm7"
Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.676770 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-khvm7" event={"ID":"af43bb2d-e763-416d-803d-16fec7332771","Type":"ContainerDied","Data":"d56d5895bb34a9485e4d216be271276937d7d7a301e7d79984299b26a86ea042"}
Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.676813 4947 scope.go:117] "RemoveContainer" containerID="a4bcd8438bd8d926fe64439908fcec662adf3b56502ee9d48c75bf06bbf3a249"
Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.713112 4947 scope.go:117] "RemoveContainer" containerID="eed5d033314da1bcbc2e52cf9e1e2bd3420778ab20933b2ef804e302ab801e05"
Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.764261 4947 scope.go:117] "RemoveContainer" containerID="3b5d100dc6d315ff476d1d5918266454374cbb046c8dd09419aacaed19996070"
Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.766671 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-khvm7"]
Jan 25 00:32:44 crc kubenswrapper[4947]: I0125 00:32:44.772509 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-khvm7"]
Jan 25 00:32:45 crc kubenswrapper[4947]: I0125 00:32:45.103805 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af43bb2d-e763-416d-803d-16fec7332771" path="/var/lib/kubelet/pods/af43bb2d-e763-416d-803d-16fec7332771/volumes"
Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.254432 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b77dd"
Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.369930 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-utilities\") pod \"70732187-2d74-4e3a-9e95-79c235f70b3b\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") "
Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.370047 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-catalog-content\") pod \"70732187-2d74-4e3a-9e95-79c235f70b3b\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") "
Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.370223 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4klrq\" (UniqueName: \"kubernetes.io/projected/70732187-2d74-4e3a-9e95-79c235f70b3b-kube-api-access-4klrq\") pod \"70732187-2d74-4e3a-9e95-79c235f70b3b\" (UID: \"70732187-2d74-4e3a-9e95-79c235f70b3b\") "
Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.371302 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-utilities" (OuterVolumeSpecName: "utilities") pod "70732187-2d74-4e3a-9e95-79c235f70b3b" (UID: "70732187-2d74-4e3a-9e95-79c235f70b3b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.377359 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70732187-2d74-4e3a-9e95-79c235f70b3b-kube-api-access-4klrq" (OuterVolumeSpecName: "kube-api-access-4klrq") pod "70732187-2d74-4e3a-9e95-79c235f70b3b" (UID: "70732187-2d74-4e3a-9e95-79c235f70b3b"). InnerVolumeSpecName "kube-api-access-4klrq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.472052 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4klrq\" (UniqueName: \"kubernetes.io/projected/70732187-2d74-4e3a-9e95-79c235f70b3b-kube-api-access-4klrq\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.472090 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-utilities\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.694843 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"45b0bbe8-2222-47ff-b23d-f2e571562df0","Type":"ContainerStarted","Data":"10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1"}
Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.698663 4947 generic.go:334] "Generic (PLEG): container finished" podID="70732187-2d74-4e3a-9e95-79c235f70b3b" containerID="6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c" exitCode=0
Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.698738 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b77dd" event={"ID":"70732187-2d74-4e3a-9e95-79c235f70b3b","Type":"ContainerDied","Data":"6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c"}
Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.698781 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-b77dd" event={"ID":"70732187-2d74-4e3a-9e95-79c235f70b3b","Type":"ContainerDied","Data":"a9c1e2ac91c88bb7647a2359a8af63858c17c41cf83a204cddc41485659caf41"}
Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.698813 4947 scope.go:117] "RemoveContainer" containerID="6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c"
Jan 25 00:32:46 crc kubenswrapper[4947]: I0125 00:32:46.698999 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-b77dd"
Jan 25 00:32:47 crc kubenswrapper[4947]: I0125 00:32:47.185515 4947 scope.go:117] "RemoveContainer" containerID="c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de"
Jan 25 00:32:47 crc kubenswrapper[4947]: I0125 00:32:47.225577 4947 scope.go:117] "RemoveContainer" containerID="c6743a7e7ecc0059b0505718650e4f5040a17357b9eb78850169d09d180bc4b6"
Jan 25 00:32:47 crc kubenswrapper[4947]: I0125 00:32:47.266809 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=6.266776044 podStartE2EDuration="6.266776044s" podCreationTimestamp="2026-01-25 00:32:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:32:47.262114833 +0000 UTC m=+1406.495105313" watchObservedRunningTime="2026-01-25 00:32:47.266776044 +0000 UTC m=+1406.499766484"
Jan 25 00:32:47 crc kubenswrapper[4947]: I0125 00:32:47.269710 4947 scope.go:117] "RemoveContainer" containerID="6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c"
Jan 25 00:32:47 crc kubenswrapper[4947]: E0125 00:32:47.270311 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c\": container with ID starting with 6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c not found: ID does not exist" containerID="6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c"
Jan 25 00:32:47 crc kubenswrapper[4947]: I0125 00:32:47.270358 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c"} err="failed to get container status \"6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c\": rpc error: code = NotFound desc = could not find container \"6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c\": container with ID starting with 6b6c23491afe2dd8e0391c40128ba92ddb3ffab0b8d78076d0bd5198424e911c not found: ID does not exist"
Jan 25 00:32:47 crc kubenswrapper[4947]: I0125 00:32:47.270390 4947 scope.go:117] "RemoveContainer" containerID="c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de"
Jan 25 00:32:47 crc kubenswrapper[4947]: E0125 00:32:47.270913 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de\": container with ID starting with c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de not found: ID does not exist" containerID="c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de"
Jan 25 00:32:47 crc kubenswrapper[4947]: I0125 00:32:47.270942 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de"} err="failed to get container status \"c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de\": rpc error: code = NotFound desc = could not find container \"c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de\": container with ID starting with c6a105f7c39acfe05ab88437cff01895af8541ce8b9c12597c0a6f71c84da4de not found: ID does not exist"
Jan 25 00:32:47 crc kubenswrapper[4947]: I0125 00:32:47.270956 4947 scope.go:117] "RemoveContainer" containerID="c6743a7e7ecc0059b0505718650e4f5040a17357b9eb78850169d09d180bc4b6"
Jan 25 00:32:47 crc kubenswrapper[4947]: E0125 00:32:47.271199 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6743a7e7ecc0059b0505718650e4f5040a17357b9eb78850169d09d180bc4b6\": container with ID starting with c6743a7e7ecc0059b0505718650e4f5040a17357b9eb78850169d09d180bc4b6 not found: ID does not exist" containerID="c6743a7e7ecc0059b0505718650e4f5040a17357b9eb78850169d09d180bc4b6"
Jan 25 00:32:47 crc kubenswrapper[4947]: I0125 00:32:47.271220 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6743a7e7ecc0059b0505718650e4f5040a17357b9eb78850169d09d180bc4b6"} err="failed to get container status \"c6743a7e7ecc0059b0505718650e4f5040a17357b9eb78850169d09d180bc4b6\": rpc error: code = NotFound desc = could not find container \"c6743a7e7ecc0059b0505718650e4f5040a17357b9eb78850169d09d180bc4b6\": container with ID starting with c6743a7e7ecc0059b0505718650e4f5040a17357b9eb78850169d09d180bc4b6 not found: ID does not exist"
Jan 25 00:32:48 crc kubenswrapper[4947]: I0125 00:32:48.604284 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70732187-2d74-4e3a-9e95-79c235f70b3b" (UID: "70732187-2d74-4e3a-9e95-79c235f70b3b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:32:48 crc kubenswrapper[4947]: I0125 00:32:48.606879 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70732187-2d74-4e3a-9e95-79c235f70b3b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:48 crc kubenswrapper[4947]: I0125 00:32:48.838762 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-b77dd"]
Jan 25 00:32:48 crc kubenswrapper[4947]: I0125 00:32:48.846519 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-b77dd"]
Jan 25 00:32:49 crc kubenswrapper[4947]: I0125 00:32:49.105963 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70732187-2d74-4e3a-9e95-79c235f70b3b" path="/var/lib/kubelet/pods/70732187-2d74-4e3a-9e95-79c235f70b3b/volumes"
Jan 25 00:32:51 crc kubenswrapper[4947]: I0125 00:32:51.847404 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Jan 25 00:32:51 crc kubenswrapper[4947]: I0125 00:32:51.848541 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="45b0bbe8-2222-47ff-b23d-f2e571562df0" containerName="docker-build" containerID="cri-o://10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1" gracePeriod=30
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.301835 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_45b0bbe8-2222-47ff-b23d-f2e571562df0/docker-build/0.log"
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.302832 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483150 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildcachedir\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") "
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483275 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-system-configs\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") "
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483307 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483357 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-node-pullsecrets\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") "
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483395 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildworkdir\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") "
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483421 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-proxy-ca-bundles\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") "
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483447 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-push\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") "
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483501 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-ca-bundles\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") "
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483561 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-pull\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") "
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483555 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483588 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-blob-cache\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") "
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483636 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97rtz\" (UniqueName: \"kubernetes.io/projected/45b0bbe8-2222-47ff-b23d-f2e571562df0-kube-api-access-97rtz\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") "
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483664 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-run\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") "
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.483699 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-root\") pod \"45b0bbe8-2222-47ff-b23d-f2e571562df0\" (UID: \"45b0bbe8-2222-47ff-b23d-f2e571562df0\") "
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.484114 4947 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.484162 4947 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.484593 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.484960 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.485264 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.485290 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.486238 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.486344 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.495305 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-pull" (OuterVolumeSpecName: "builder-dockercfg-jd52j-pull") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "builder-dockercfg-jd52j-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.495398 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45b0bbe8-2222-47ff-b23d-f2e571562df0-kube-api-access-97rtz" (OuterVolumeSpecName: "kube-api-access-97rtz") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "kube-api-access-97rtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.499004 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-push" (OuterVolumeSpecName: "builder-dockercfg-jd52j-push") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "builder-dockercfg-jd52j-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.535011 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "45b0bbe8-2222-47ff-b23d-f2e571562df0" (UID: "45b0bbe8-2222-47ff-b23d-f2e571562df0"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.585518 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97rtz\" (UniqueName: \"kubernetes.io/projected/45b0bbe8-2222-47ff-b23d-f2e571562df0-kube-api-access-97rtz\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.585564 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.585578 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.585591 4947 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.585603 4947 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.585617 4947 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.585630 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-push\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.585643 4947 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.585655 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/45b0bbe8-2222-47ff-b23d-f2e571562df0-builder-dockercfg-jd52j-pull\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.585666 4947 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45b0bbe8-2222-47ff-b23d-f2e571562df0-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.768277 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_45b0bbe8-2222-47ff-b23d-f2e571562df0/docker-build/0.log"
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.769306 4947 generic.go:334] "Generic (PLEG): container finished" podID="45b0bbe8-2222-47ff-b23d-f2e571562df0" containerID="10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1" exitCode=1
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.769371 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"45b0bbe8-2222-47ff-b23d-f2e571562df0","Type":"ContainerDied","Data":"10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1"}
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.769422 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"45b0bbe8-2222-47ff-b23d-f2e571562df0","Type":"ContainerDied","Data":"26747227c318ee56f01386f2aef39290d6a1c144b6fa6c040fda3c0ee559d715"}
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.769446 4947 scope.go:117] "RemoveContainer" containerID="10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1"
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.769459 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build"
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.828754 4947 scope.go:117] "RemoveContainer" containerID="d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5"
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.837298 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.850861 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"]
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.865851 4947 scope.go:117] "RemoveContainer" containerID="10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1"
Jan 25 00:32:52 crc kubenswrapper[4947]: E0125 00:32:52.866457 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1\": container with ID starting with 10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1 not found: ID does not exist" containerID="10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1"
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.866498 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1"} err="failed to get container status \"10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1\": rpc error: code = NotFound desc = could not find container \"10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1\": container with ID starting with 10cfbfd273a389f2d3bc2bd418daae0276661ae26d3ea485d785387716756cc1 not found: ID does not exist"
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.866525 4947 scope.go:117] "RemoveContainer" containerID="d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5"
Jan 25 00:32:52 crc kubenswrapper[4947]: E0125 00:32:52.867191 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5\": container with ID starting with d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5 not found: ID does not exist" containerID="d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5"
Jan 25 00:32:52 crc kubenswrapper[4947]: I0125 00:32:52.867270 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5"} err="failed to get container status \"d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5\": rpc error: code = NotFound desc = could not find container \"d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5\": container with ID starting with d07fd80933ae56a5d55a4b6d74e59410bbcab1eb969ab1d509535830b3547eb5 not found: ID does not exist"
Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.106852 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45b0bbe8-2222-47ff-b23d-f2e571562df0" path="/var/lib/kubelet/pods/45b0bbe8-2222-47ff-b23d-f2e571562df0/volumes"
Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495107 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"]
Jan 25 00:32:53 crc kubenswrapper[4947]: E0125 00:32:53.495423 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b0bbe8-2222-47ff-b23d-f2e571562df0" containerName="manage-dockerfile"
Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495440 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b0bbe8-2222-47ff-b23d-f2e571562df0" containerName="manage-dockerfile"
Jan 25 00:32:53 crc kubenswrapper[4947]: E0125 00:32:53.495455 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70732187-2d74-4e3a-9e95-79c235f70b3b" containerName="extract-utilities"
Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495463 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="70732187-2d74-4e3a-9e95-79c235f70b3b" containerName="extract-utilities"
Jan 25 00:32:53 crc kubenswrapper[4947]: E0125 00:32:53.495473 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af43bb2d-e763-416d-803d-16fec7332771" containerName="extract-content"
Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495481 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="af43bb2d-e763-416d-803d-16fec7332771" containerName="extract-content"
Jan 25 00:32:53 crc kubenswrapper[4947]: E0125 00:32:53.495492 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af43bb2d-e763-416d-803d-16fec7332771" containerName="registry-server"
Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495500 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="af43bb2d-e763-416d-803d-16fec7332771" containerName="registry-server"
Jan 25 00:32:53 crc kubenswrapper[4947]: E0125 00:32:53.495507 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b0bbe8-2222-47ff-b23d-f2e571562df0" containerName="docker-build"
Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495515 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b0bbe8-2222-47ff-b23d-f2e571562df0" containerName="docker-build"
Jan 25 00:32:53 crc kubenswrapper[4947]: E0125 00:32:53.495527 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af43bb2d-e763-416d-803d-16fec7332771" containerName="extract-utilities"
Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495535 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="af43bb2d-e763-416d-803d-16fec7332771" containerName="extract-utilities"
Jan 25 00:32:53 crc kubenswrapper[4947]: E0125 00:32:53.495550 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70732187-2d74-4e3a-9e95-79c235f70b3b" containerName="registry-server"
Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495557 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="70732187-2d74-4e3a-9e95-79c235f70b3b" containerName="registry-server"
Jan 25 00:32:53 crc kubenswrapper[4947]: E0125 00:32:53.495569 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70732187-2d74-4e3a-9e95-79c235f70b3b" containerName="extract-content"
Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495576 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="70732187-2d74-4e3a-9e95-79c235f70b3b" containerName="extract-content"
Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495701 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="af43bb2d-e763-416d-803d-16fec7332771" containerName="registry-server"
Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495715 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="45b0bbe8-2222-47ff-b23d-f2e571562df0" containerName="docker-build"
Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.495734 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="70732187-2d74-4e3a-9e95-79c235f70b3b" containerName="registry-server"
Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.496794 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.504442 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-jd52j"
Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.504820 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-sys-config"
Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.504996 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-ca"
Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.505064 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-global-ca"
Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.512496 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"]
Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599315 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599425 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599470 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for
volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599523 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599552 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvdcj\" (UniqueName: \"kubernetes.io/projected/1401febc-016f-410e-83fb-dd8e2eaead5e-kube-api-access-jvdcj\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599664 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599735 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " 
pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599771 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599813 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599862 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599883 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.599939 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.701713 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.701831 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.701871 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.701894 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.701918 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.702489 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.702545 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.703029 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.703108 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.703195 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.703220 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.703253 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvdcj\" (UniqueName: \"kubernetes.io/projected/1401febc-016f-410e-83fb-dd8e2eaead5e-kube-api-access-jvdcj\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.703301 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.703337 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 
00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.703352 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.703377 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.703749 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.703931 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.704207 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " 
pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.704164 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.704380 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.708459 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.709235 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.723098 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvdcj\" (UniqueName: \"kubernetes.io/projected/1401febc-016f-410e-83fb-dd8e2eaead5e-kube-api-access-jvdcj\") pod 
\"prometheus-webhook-snmp-2-build\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:53 crc kubenswrapper[4947]: I0125 00:32:53.816585 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:32:54 crc kubenswrapper[4947]: I0125 00:32:54.331389 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 25 00:32:54 crc kubenswrapper[4947]: I0125 00:32:54.791279 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"1401febc-016f-410e-83fb-dd8e2eaead5e","Type":"ContainerStarted","Data":"12a4a876b1c059d28d4620add3f0e9fdabe33bc5bab3e19249ea10cf4e371664"} Jan 25 00:32:54 crc kubenswrapper[4947]: I0125 00:32:54.791747 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"1401febc-016f-410e-83fb-dd8e2eaead5e","Type":"ContainerStarted","Data":"6d75a8acc54f852c0df652a21678fabe1ea78a90eda755e5dd6934eaed7a164f"} Jan 25 00:32:55 crc kubenswrapper[4947]: I0125 00:32:55.800562 4947 generic.go:334] "Generic (PLEG): container finished" podID="1401febc-016f-410e-83fb-dd8e2eaead5e" containerID="12a4a876b1c059d28d4620add3f0e9fdabe33bc5bab3e19249ea10cf4e371664" exitCode=0 Jan 25 00:32:55 crc kubenswrapper[4947]: I0125 00:32:55.800622 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"1401febc-016f-410e-83fb-dd8e2eaead5e","Type":"ContainerDied","Data":"12a4a876b1c059d28d4620add3f0e9fdabe33bc5bab3e19249ea10cf4e371664"} Jan 25 00:32:56 crc kubenswrapper[4947]: I0125 00:32:56.812438 4947 generic.go:334] "Generic (PLEG): container finished" podID="1401febc-016f-410e-83fb-dd8e2eaead5e" containerID="1fa4cda2ff1aafe3e40d8f14aab93a93a41bce1589d482eaee445fe7dda68118" exitCode=0 
Jan 25 00:32:56 crc kubenswrapper[4947]: I0125 00:32:56.812541 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"1401febc-016f-410e-83fb-dd8e2eaead5e","Type":"ContainerDied","Data":"1fa4cda2ff1aafe3e40d8f14aab93a93a41bce1589d482eaee445fe7dda68118"} Jan 25 00:32:56 crc kubenswrapper[4947]: I0125 00:32:56.869024 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_1401febc-016f-410e-83fb-dd8e2eaead5e/manage-dockerfile/0.log" Jan 25 00:32:57 crc kubenswrapper[4947]: I0125 00:32:57.826310 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"1401febc-016f-410e-83fb-dd8e2eaead5e","Type":"ContainerStarted","Data":"c65f939a7d3cf1486e929917d5a7eac5a1d5bb6038d4ad4228acb5ede1bb53de"} Jan 25 00:32:57 crc kubenswrapper[4947]: I0125 00:32:57.887412 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=4.887332311 podStartE2EDuration="4.887332311s" podCreationTimestamp="2026-01-25 00:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:32:57.872913926 +0000 UTC m=+1417.105904396" watchObservedRunningTime="2026-01-25 00:32:57.887332311 +0000 UTC m=+1417.120322821" Jan 25 00:33:53 crc kubenswrapper[4947]: I0125 00:33:53.285245 4947 generic.go:334] "Generic (PLEG): container finished" podID="1401febc-016f-410e-83fb-dd8e2eaead5e" containerID="c65f939a7d3cf1486e929917d5a7eac5a1d5bb6038d4ad4228acb5ede1bb53de" exitCode=0 Jan 25 00:33:53 crc kubenswrapper[4947]: I0125 00:33:53.285332 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" 
event={"ID":"1401febc-016f-410e-83fb-dd8e2eaead5e","Type":"ContainerDied","Data":"c65f939a7d3cf1486e929917d5a7eac5a1d5bb6038d4ad4228acb5ede1bb53de"} Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.627916 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.695690 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-pull\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.695737 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-system-configs\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.695768 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-root\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.695815 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-ca-bundles\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.695834 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-proxy-ca-bundles\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.695856 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-buildcachedir\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.695904 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-push\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.695934 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-node-pullsecrets\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.695969 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-buildworkdir\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.695992 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvdcj\" (UniqueName: \"kubernetes.io/projected/1401febc-016f-410e-83fb-dd8e2eaead5e-kube-api-access-jvdcj\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: 
\"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.696010 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-run\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.696027 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-build-blob-cache\") pod \"1401febc-016f-410e-83fb-dd8e2eaead5e\" (UID: \"1401febc-016f-410e-83fb-dd8e2eaead5e\") " Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.696295 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.696802 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.697243 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.697629 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.697837 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.698386 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.699334 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.701270 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-push" (OuterVolumeSpecName: "builder-dockercfg-jd52j-push") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "builder-dockercfg-jd52j-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.702447 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1401febc-016f-410e-83fb-dd8e2eaead5e-kube-api-access-jvdcj" (OuterVolumeSpecName: "kube-api-access-jvdcj") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "kube-api-access-jvdcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.708286 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-pull" (OuterVolumeSpecName: "builder-dockercfg-jd52j-pull") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "builder-dockercfg-jd52j-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.797612 4947 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.797927 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvdcj\" (UniqueName: \"kubernetes.io/projected/1401febc-016f-410e-83fb-dd8e2eaead5e-kube-api-access-jvdcj\") on node \"crc\" DevicePath \"\"" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.797948 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.797959 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-pull\" (UniqueName: \"kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-pull\") on node \"crc\" DevicePath \"\"" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.797970 4947 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.797984 4947 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.798035 4947 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1401febc-016f-410e-83fb-dd8e2eaead5e-build-proxy-ca-bundles\") on node \"crc\" 
DevicePath \"\"" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.798048 4947 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.798060 4947 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-jd52j-push\" (UniqueName: \"kubernetes.io/secret/1401febc-016f-410e-83fb-dd8e2eaead5e-builder-dockercfg-jd52j-push\") on node \"crc\" DevicePath \"\"" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.798073 4947 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/1401febc-016f-410e-83fb-dd8e2eaead5e-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.799604 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:33:54 crc kubenswrapper[4947]: I0125 00:33:54.899846 4947 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 25 00:33:55 crc kubenswrapper[4947]: I0125 00:33:55.304510 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"1401febc-016f-410e-83fb-dd8e2eaead5e","Type":"ContainerDied","Data":"6d75a8acc54f852c0df652a21678fabe1ea78a90eda755e5dd6934eaed7a164f"} Jan 25 00:33:55 crc kubenswrapper[4947]: I0125 00:33:55.304553 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d75a8acc54f852c0df652a21678fabe1ea78a90eda755e5dd6934eaed7a164f" Jan 25 00:33:55 crc kubenswrapper[4947]: I0125 00:33:55.304580 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 25 00:33:55 crc kubenswrapper[4947]: I0125 00:33:55.593689 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "1401febc-016f-410e-83fb-dd8e2eaead5e" (UID: "1401febc-016f-410e-83fb-dd8e2eaead5e"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:33:55 crc kubenswrapper[4947]: I0125 00:33:55.609618 4947 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/1401febc-016f-410e-83fb-dd8e2eaead5e-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.281357 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-859d6d5949-k28x7"] Jan 25 00:34:00 crc kubenswrapper[4947]: E0125 00:34:00.282063 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1401febc-016f-410e-83fb-dd8e2eaead5e" containerName="git-clone" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.282084 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1401febc-016f-410e-83fb-dd8e2eaead5e" containerName="git-clone" Jan 25 00:34:00 crc kubenswrapper[4947]: E0125 00:34:00.282105 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1401febc-016f-410e-83fb-dd8e2eaead5e" containerName="manage-dockerfile" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.282122 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1401febc-016f-410e-83fb-dd8e2eaead5e" containerName="manage-dockerfile" Jan 25 00:34:00 crc kubenswrapper[4947]: E0125 00:34:00.282192 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1401febc-016f-410e-83fb-dd8e2eaead5e" containerName="docker-build" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.282205 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1401febc-016f-410e-83fb-dd8e2eaead5e" containerName="docker-build" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.282413 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="1401febc-016f-410e-83fb-dd8e2eaead5e" containerName="docker-build" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.283079 4947 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.285648 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-tsdlr" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.303000 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-859d6d5949-k28x7"] Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.377862 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbsz8\" (UniqueName: \"kubernetes.io/projected/043c18a1-e602-4917-b73d-5331da5ee62f-kube-api-access-sbsz8\") pod \"smart-gateway-operator-859d6d5949-k28x7\" (UID: \"043c18a1-e602-4917-b73d-5331da5ee62f\") " pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.377978 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/043c18a1-e602-4917-b73d-5331da5ee62f-runner\") pod \"smart-gateway-operator-859d6d5949-k28x7\" (UID: \"043c18a1-e602-4917-b73d-5331da5ee62f\") " pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.479489 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/043c18a1-e602-4917-b73d-5331da5ee62f-runner\") pod \"smart-gateway-operator-859d6d5949-k28x7\" (UID: \"043c18a1-e602-4917-b73d-5331da5ee62f\") " pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.479661 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbsz8\" (UniqueName: 
\"kubernetes.io/projected/043c18a1-e602-4917-b73d-5331da5ee62f-kube-api-access-sbsz8\") pod \"smart-gateway-operator-859d6d5949-k28x7\" (UID: \"043c18a1-e602-4917-b73d-5331da5ee62f\") " pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.480544 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/043c18a1-e602-4917-b73d-5331da5ee62f-runner\") pod \"smart-gateway-operator-859d6d5949-k28x7\" (UID: \"043c18a1-e602-4917-b73d-5331da5ee62f\") " pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.502426 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbsz8\" (UniqueName: \"kubernetes.io/projected/043c18a1-e602-4917-b73d-5331da5ee62f-kube-api-access-sbsz8\") pod \"smart-gateway-operator-859d6d5949-k28x7\" (UID: \"043c18a1-e602-4917-b73d-5331da5ee62f\") " pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" Jan 25 00:34:00 crc kubenswrapper[4947]: I0125 00:34:00.611617 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" Jan 25 00:34:01 crc kubenswrapper[4947]: I0125 00:34:01.026017 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-859d6d5949-k28x7"] Jan 25 00:34:01 crc kubenswrapper[4947]: W0125 00:34:01.035316 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod043c18a1_e602_4917_b73d_5331da5ee62f.slice/crio-f5d9097adf7dcd2c64f407e053119c0e30869753ea7dd53a52ad783ee6d7d90d WatchSource:0}: Error finding container f5d9097adf7dcd2c64f407e053119c0e30869753ea7dd53a52ad783ee6d7d90d: Status 404 returned error can't find the container with id f5d9097adf7dcd2c64f407e053119c0e30869753ea7dd53a52ad783ee6d7d90d Jan 25 00:34:01 crc kubenswrapper[4947]: I0125 00:34:01.357541 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" event={"ID":"043c18a1-e602-4917-b73d-5331da5ee62f","Type":"ContainerStarted","Data":"f5d9097adf7dcd2c64f407e053119c0e30869753ea7dd53a52ad783ee6d7d90d"} Jan 25 00:34:06 crc kubenswrapper[4947]: I0125 00:34:06.347958 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4"] Jan 25 00:34:06 crc kubenswrapper[4947]: I0125 00:34:06.351515 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4" Jan 25 00:34:06 crc kubenswrapper[4947]: I0125 00:34:06.354009 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-lz5n5" Jan 25 00:34:06 crc kubenswrapper[4947]: I0125 00:34:06.364226 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4"] Jan 25 00:34:06 crc kubenswrapper[4947]: I0125 00:34:06.374464 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq9cv\" (UniqueName: \"kubernetes.io/projected/ccea2ce4-d212-4599-a152-5a2d53366128-kube-api-access-tq9cv\") pod \"service-telemetry-operator-7b5f5cc44d-grff4\" (UID: \"ccea2ce4-d212-4599-a152-5a2d53366128\") " pod="service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4" Jan 25 00:34:06 crc kubenswrapper[4947]: I0125 00:34:06.375548 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/ccea2ce4-d212-4599-a152-5a2d53366128-runner\") pod \"service-telemetry-operator-7b5f5cc44d-grff4\" (UID: \"ccea2ce4-d212-4599-a152-5a2d53366128\") " pod="service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4" Jan 25 00:34:06 crc kubenswrapper[4947]: I0125 00:34:06.477079 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/ccea2ce4-d212-4599-a152-5a2d53366128-runner\") pod \"service-telemetry-operator-7b5f5cc44d-grff4\" (UID: \"ccea2ce4-d212-4599-a152-5a2d53366128\") " pod="service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4" Jan 25 00:34:06 crc kubenswrapper[4947]: I0125 00:34:06.477213 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq9cv\" (UniqueName: 
\"kubernetes.io/projected/ccea2ce4-d212-4599-a152-5a2d53366128-kube-api-access-tq9cv\") pod \"service-telemetry-operator-7b5f5cc44d-grff4\" (UID: \"ccea2ce4-d212-4599-a152-5a2d53366128\") " pod="service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4" Jan 25 00:34:06 crc kubenswrapper[4947]: I0125 00:34:06.477848 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/ccea2ce4-d212-4599-a152-5a2d53366128-runner\") pod \"service-telemetry-operator-7b5f5cc44d-grff4\" (UID: \"ccea2ce4-d212-4599-a152-5a2d53366128\") " pod="service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4" Jan 25 00:34:06 crc kubenswrapper[4947]: I0125 00:34:06.499788 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq9cv\" (UniqueName: \"kubernetes.io/projected/ccea2ce4-d212-4599-a152-5a2d53366128-kube-api-access-tq9cv\") pod \"service-telemetry-operator-7b5f5cc44d-grff4\" (UID: \"ccea2ce4-d212-4599-a152-5a2d53366128\") " pod="service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4" Jan 25 00:34:06 crc kubenswrapper[4947]: I0125 00:34:06.671437 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4" Jan 25 00:34:12 crc kubenswrapper[4947]: I0125 00:34:12.918576 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4"] Jan 25 00:34:13 crc kubenswrapper[4947]: I0125 00:34:13.479396 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4" event={"ID":"ccea2ce4-d212-4599-a152-5a2d53366128","Type":"ContainerStarted","Data":"125a40cb0040bffb623a7e38be48fd96835452f5a833fa6e3d95907548d87645"} Jan 25 00:34:16 crc kubenswrapper[4947]: E0125 00:34:16.416483 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/smart-gateway-operator:stable-1.5" Jan 25 00:34:16 crc kubenswrapper[4947]: E0125 00:34:16.416855 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/infrawatch/smart-gateway-operator:stable-1.5,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:smart-gateway-operator,ValueFrom:nil,},EnvVar{Name:ANSIBLE_GATHERING,Value:explicit,ValueFrom:nil,},EnvVar{Name:ANSIBLE_VERBOSITY_SMARTGATEWAY_SMARTGATEWAY_INFRA_WATCH,Value:4,ValueFrom:nil,},EnvVar{Name:ANSIBLE_DEBUG_LOGS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CORE_SMARTGATEWAY_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/sg-core:latest,ValueFrom:nil,},EnvVar{
Name:RELATED_IMAGE_BRIDGE_SMARTGATEWAY_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/sg-bridge:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OAUTH_PROXY_IMAGE,Value:quay.io/openshift/origin-oauth-proxy:latest,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:smart-gateway-operator.v5.0.1769301236,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:runner,ReadOnly:false,MountPath:/tmp/ansible-operator/runner,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sbsz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod smart-gateway-operator-859d6d5949-k28x7_service-telemetry(043c18a1-e602-4917-b73d-5331da5ee62f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 00:34:16 crc kubenswrapper[4947]: E0125 00:34:16.418057 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" 
podUID="043c18a1-e602-4917-b73d-5331da5ee62f" Jan 25 00:34:16 crc kubenswrapper[4947]: E0125 00:34:16.501323 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/smart-gateway-operator:stable-1.5\\\"\"" pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" podUID="043c18a1-e602-4917-b73d-5331da5ee62f" Jan 25 00:34:17 crc kubenswrapper[4947]: I0125 00:34:17.073145 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:34:17 crc kubenswrapper[4947]: I0125 00:34:17.073210 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:34:21 crc kubenswrapper[4947]: I0125 00:34:21.550969 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4" event={"ID":"ccea2ce4-d212-4599-a152-5a2d53366128","Type":"ContainerStarted","Data":"8aeab85df7d9b6543e8fe54d28dc82c81dcbc638388c934892da45eb5e34af11"} Jan 25 00:34:21 crc kubenswrapper[4947]: I0125 00:34:21.628414 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-7b5f5cc44d-grff4" podStartSLOduration=8.095226514 podStartE2EDuration="15.628391232s" podCreationTimestamp="2026-01-25 00:34:06 +0000 UTC" firstStartedPulling="2026-01-25 00:34:13.355257393 +0000 UTC m=+1492.588247843" lastFinishedPulling="2026-01-25 00:34:20.888422121 +0000 UTC 
m=+1500.121412561" observedRunningTime="2026-01-25 00:34:21.624060448 +0000 UTC m=+1500.857050898" watchObservedRunningTime="2026-01-25 00:34:21.628391232 +0000 UTC m=+1500.861381672" Jan 25 00:34:31 crc kubenswrapper[4947]: I0125 00:34:31.628944 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" event={"ID":"043c18a1-e602-4917-b73d-5331da5ee62f","Type":"ContainerStarted","Data":"781fae171b2127274dfdc7210acab1db35fe3f1b8ff5f0595696d270087274c4"} Jan 25 00:34:31 crc kubenswrapper[4947]: I0125 00:34:31.670240 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-859d6d5949-k28x7" podStartSLOduration=2.088789645 podStartE2EDuration="31.670221045s" podCreationTimestamp="2026-01-25 00:34:00 +0000 UTC" firstStartedPulling="2026-01-25 00:34:01.038971618 +0000 UTC m=+1480.271962058" lastFinishedPulling="2026-01-25 00:34:30.620403018 +0000 UTC m=+1509.853393458" observedRunningTime="2026-01-25 00:34:31.663834051 +0000 UTC m=+1510.896824501" watchObservedRunningTime="2026-01-25 00:34:31.670221045 +0000 UTC m=+1510.903211485" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.197874 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pvgb8"] Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.199841 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.202112 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.202397 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.202526 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-ktwfn" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.202647 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.204385 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.204859 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.207344 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.215619 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pvgb8"] Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.359320 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: 
\"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.359385 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.359404 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-users\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.359424 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.359509 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-config\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.359582 
4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vf7l\" (UniqueName: \"kubernetes.io/projected/7b062645-435c-4461-93cd-8cbe7cd8e733-kube-api-access-5vf7l\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.359608 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.461045 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-config\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.461182 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vf7l\" (UniqueName: \"kubernetes.io/projected/7b062645-435c-4461-93cd-8cbe7cd8e733-kube-api-access-5vf7l\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.461242 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-ca\") pod 
\"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.461285 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.461346 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.461382 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-users\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.461418 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: 
I0125 00:34:45.462551 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-config\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.468587 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.469235 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.481540 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-users\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.484543 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: 
\"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.486025 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.506643 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vf7l\" (UniqueName: \"kubernetes.io/projected/7b062645-435c-4461-93cd-8cbe7cd8e733-kube-api-access-5vf7l\") pod \"default-interconnect-68864d46cb-pvgb8\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.550650 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:34:45 crc kubenswrapper[4947]: I0125 00:34:45.972464 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pvgb8"] Jan 25 00:34:46 crc kubenswrapper[4947]: I0125 00:34:46.735575 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" event={"ID":"7b062645-435c-4461-93cd-8cbe7cd8e733","Type":"ContainerStarted","Data":"1e919ed0615f4f4827a60cadd255f60dc5f4870e6c12a5fe6e5916224ada10bc"} Jan 25 00:34:47 crc kubenswrapper[4947]: I0125 00:34:47.072640 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:34:47 crc kubenswrapper[4947]: I0125 00:34:47.072701 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:34:54 crc kubenswrapper[4947]: I0125 00:34:54.796045 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" event={"ID":"7b062645-435c-4461-93cd-8cbe7cd8e733","Type":"ContainerStarted","Data":"99188ef6337d790fc8a682f69dd76728f1c088acbf4f5174dab2cdba95af3406"} Jan 25 00:34:54 crc kubenswrapper[4947]: I0125 00:34:54.821035 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" podStartSLOduration=1.222919649 podStartE2EDuration="9.821006657s" podCreationTimestamp="2026-01-25 00:34:45 +0000 UTC" 
firstStartedPulling="2026-01-25 00:34:45.992858811 +0000 UTC m=+1525.225849261" lastFinishedPulling="2026-01-25 00:34:54.590945829 +0000 UTC m=+1533.823936269" observedRunningTime="2026-01-25 00:34:54.815202878 +0000 UTC m=+1534.048193318" watchObservedRunningTime="2026-01-25 00:34:54.821006657 +0000 UTC m=+1534.053997097" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.472491 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.475644 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.478458 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-4vrfm" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.479454 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.479733 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-2" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.479831 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-1" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.479975 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.480164 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.481363 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-tls-assets-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 
00:34:57.482823 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.482887 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.488350 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.491483 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.667409 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.667536 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d82b42a-6236-43af-8190-d28e96b2b933-config-out\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.667636 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.667754 4947 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d87f148a-2eeb-416b-ae9b-e7d61d2c7097\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d87f148a-2eeb-416b-ae9b-e7d61d2c7097\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.667798 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w22zj\" (UniqueName: \"kubernetes.io/projected/3d82b42a-6236-43af-8190-d28e96b2b933-kube-api-access-w22zj\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.667835 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.667875 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.667932 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-config\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " 
pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.667974 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d82b42a-6236-43af-8190-d28e96b2b933-tls-assets\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.668005 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.668046 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-web-config\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.668091 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769104 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-session-secret\") pod \"prometheus-default-0\" 
(UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769165 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769202 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d82b42a-6236-43af-8190-d28e96b2b933-config-out\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769233 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769257 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d87f148a-2eeb-416b-ae9b-e7d61d2c7097\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d87f148a-2eeb-416b-ae9b-e7d61d2c7097\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769278 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w22zj\" (UniqueName: \"kubernetes.io/projected/3d82b42a-6236-43af-8190-d28e96b2b933-kube-api-access-w22zj\") pod 
\"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769293 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769312 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769338 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-config\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769357 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d82b42a-6236-43af-8190-d28e96b2b933-tls-assets\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769373 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-prometheus-default-rulefiles-2\") pod 
\"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.769392 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-web-config\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.770628 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: E0125 00:34:57.770841 4947 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 25 00:34:57 crc kubenswrapper[4947]: E0125 00:34:57.770931 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-prometheus-proxy-tls podName:3d82b42a-6236-43af-8190-d28e96b2b933 nodeName:}" failed. No retries permitted until 2026-01-25 00:34:58.270909162 +0000 UTC m=+1537.503899692 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "3d82b42a-6236-43af-8190-d28e96b2b933") : secret "default-prometheus-proxy-tls" not found Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.771490 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.771586 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.773346 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3d82b42a-6236-43af-8190-d28e96b2b933-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.774901 4947 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.774955 4947 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d87f148a-2eeb-416b-ae9b-e7d61d2c7097\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d87f148a-2eeb-416b-ae9b-e7d61d2c7097\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f0750656d621eabcb0a2ba208da554ab8fdfeaf937caaa46473a227c8cb68ab8/globalmount\"" pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.776515 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d82b42a-6236-43af-8190-d28e96b2b933-tls-assets\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.776939 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.780039 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d82b42a-6236-43af-8190-d28e96b2b933-config-out\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.780176 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-web-config\") pod \"prometheus-default-0\" (UID: 
\"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.780810 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-config\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.796717 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w22zj\" (UniqueName: \"kubernetes.io/projected/3d82b42a-6236-43af-8190-d28e96b2b933-kube-api-access-w22zj\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:57 crc kubenswrapper[4947]: I0125 00:34:57.829949 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d87f148a-2eeb-416b-ae9b-e7d61d2c7097\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d87f148a-2eeb-416b-ae9b-e7d61d2c7097\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:58 crc kubenswrapper[4947]: I0125 00:34:58.275531 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:58 crc kubenswrapper[4947]: E0125 00:34:58.275707 4947 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 25 00:34:58 crc kubenswrapper[4947]: E0125 00:34:58.275802 4947 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-prometheus-proxy-tls podName:3d82b42a-6236-43af-8190-d28e96b2b933 nodeName:}" failed. No retries permitted until 2026-01-25 00:34:59.275777785 +0000 UTC m=+1538.508768235 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "3d82b42a-6236-43af-8190-d28e96b2b933") : secret "default-prometheus-proxy-tls" not found Jan 25 00:34:59 crc kubenswrapper[4947]: I0125 00:34:59.289784 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:59 crc kubenswrapper[4947]: I0125 00:34:59.295774 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/3d82b42a-6236-43af-8190-d28e96b2b933-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"3d82b42a-6236-43af-8190-d28e96b2b933\") " pod="service-telemetry/prometheus-default-0" Jan 25 00:34:59 crc kubenswrapper[4947]: I0125 00:34:59.296159 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 25 00:34:59 crc kubenswrapper[4947]: I0125 00:34:59.763738 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 25 00:34:59 crc kubenswrapper[4947]: W0125 00:34:59.785780 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d82b42a_6236_43af_8190_d28e96b2b933.slice/crio-886536085f67f9860c669e6a0659f8212d2d57325cc2b2791e75511cdff8bfba WatchSource:0}: Error finding container 886536085f67f9860c669e6a0659f8212d2d57325cc2b2791e75511cdff8bfba: Status 404 returned error can't find the container with id 886536085f67f9860c669e6a0659f8212d2d57325cc2b2791e75511cdff8bfba Jan 25 00:34:59 crc kubenswrapper[4947]: I0125 00:34:59.841336 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3d82b42a-6236-43af-8190-d28e96b2b933","Type":"ContainerStarted","Data":"886536085f67f9860c669e6a0659f8212d2d57325cc2b2791e75511cdff8bfba"} Jan 25 00:35:03 crc kubenswrapper[4947]: I0125 00:35:03.879172 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3d82b42a-6236-43af-8190-d28e96b2b933","Type":"ContainerStarted","Data":"a045c02ba900e38ed58a3c74b6d3dc84912734d51ba0245f945e11a4825116da"} Jan 25 00:35:07 crc kubenswrapper[4947]: I0125 00:35:07.133697 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-4gkbj"] Jan 25 00:35:07 crc kubenswrapper[4947]: I0125 00:35:07.134877 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-4gkbj" Jan 25 00:35:07 crc kubenswrapper[4947]: I0125 00:35:07.146637 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-4gkbj"] Jan 25 00:35:07 crc kubenswrapper[4947]: I0125 00:35:07.302148 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nt5vj\" (UniqueName: \"kubernetes.io/projected/b6cfc9d0-598c-4149-8a38-ec02ced8d2b8-kube-api-access-nt5vj\") pod \"default-snmp-webhook-6856cfb745-4gkbj\" (UID: \"b6cfc9d0-598c-4149-8a38-ec02ced8d2b8\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-4gkbj" Jan 25 00:35:07 crc kubenswrapper[4947]: I0125 00:35:07.403838 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nt5vj\" (UniqueName: \"kubernetes.io/projected/b6cfc9d0-598c-4149-8a38-ec02ced8d2b8-kube-api-access-nt5vj\") pod \"default-snmp-webhook-6856cfb745-4gkbj\" (UID: \"b6cfc9d0-598c-4149-8a38-ec02ced8d2b8\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-4gkbj" Jan 25 00:35:07 crc kubenswrapper[4947]: I0125 00:35:07.436022 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nt5vj\" (UniqueName: \"kubernetes.io/projected/b6cfc9d0-598c-4149-8a38-ec02ced8d2b8-kube-api-access-nt5vj\") pod \"default-snmp-webhook-6856cfb745-4gkbj\" (UID: \"b6cfc9d0-598c-4149-8a38-ec02ced8d2b8\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-4gkbj" Jan 25 00:35:07 crc kubenswrapper[4947]: I0125 00:35:07.455871 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-4gkbj" Jan 25 00:35:07 crc kubenswrapper[4947]: I0125 00:35:07.898229 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-4gkbj"] Jan 25 00:35:07 crc kubenswrapper[4947]: W0125 00:35:07.905314 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6cfc9d0_598c_4149_8a38_ec02ced8d2b8.slice/crio-3400200c7144f00cbdb00e101b7366d32b06932b58fd8100cb7d507259dd1c9d WatchSource:0}: Error finding container 3400200c7144f00cbdb00e101b7366d32b06932b58fd8100cb7d507259dd1c9d: Status 404 returned error can't find the container with id 3400200c7144f00cbdb00e101b7366d32b06932b58fd8100cb7d507259dd1c9d Jan 25 00:35:07 crc kubenswrapper[4947]: I0125 00:35:07.924549 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-4gkbj" event={"ID":"b6cfc9d0-598c-4149-8a38-ec02ced8d2b8","Type":"ContainerStarted","Data":"3400200c7144f00cbdb00e101b7366d32b06932b58fd8100cb7d507259dd1c9d"} Jan 25 00:35:10 crc kubenswrapper[4947]: I0125 00:35:10.946357 4947 generic.go:334] "Generic (PLEG): container finished" podID="3d82b42a-6236-43af-8190-d28e96b2b933" containerID="a045c02ba900e38ed58a3c74b6d3dc84912734d51ba0245f945e11a4825116da" exitCode=0 Jan 25 00:35:10 crc kubenswrapper[4947]: I0125 00:35:10.946454 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3d82b42a-6236-43af-8190-d28e96b2b933","Type":"ContainerDied","Data":"a045c02ba900e38ed58a3c74b6d3dc84912734d51ba0245f945e11a4825116da"} Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.159264 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.163031 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.165730 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-dgrg2" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.167863 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.168089 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.168574 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-tls-assets-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.172088 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-web-config" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.172118 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.177666 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.260918 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4888dfdb-3780-4d4b-ad3a-4c1238a72464-tls-assets\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.260962 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-web-config\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.260991 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-config-volume\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.261207 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.261487 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.261643 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brfsp\" (UniqueName: \"kubernetes.io/projected/4888dfdb-3780-4d4b-ad3a-4c1238a72464-kube-api-access-brfsp\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.261835 4947 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4888dfdb-3780-4d4b-ad3a-4c1238a72464-config-out\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.261883 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-36992d4a-ba50-47dc-838a-1aa18f95be08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36992d4a-ba50-47dc-838a-1aa18f95be08\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.261992 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.363478 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.363543 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" 
Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.363581 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brfsp\" (UniqueName: \"kubernetes.io/projected/4888dfdb-3780-4d4b-ad3a-4c1238a72464-kube-api-access-brfsp\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.363620 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4888dfdb-3780-4d4b-ad3a-4c1238a72464-config-out\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.363657 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-36992d4a-ba50-47dc-838a-1aa18f95be08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36992d4a-ba50-47dc-838a-1aa18f95be08\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.363690 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.363724 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4888dfdb-3780-4d4b-ad3a-4c1238a72464-tls-assets\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc 
kubenswrapper[4947]: I0125 00:35:11.363747 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-web-config\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: E0125 00:35:11.363761 4947 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.363774 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-config-volume\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: E0125 00:35:11.363925 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls podName:4888dfdb-3780-4d4b-ad3a-4c1238a72464 nodeName:}" failed. No retries permitted until 2026-01-25 00:35:11.863810764 +0000 UTC m=+1551.096801204 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "4888dfdb-3780-4d4b-ad3a-4c1238a72464") : secret "default-alertmanager-proxy-tls" not found Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.368454 4947 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.368585 4947 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-36992d4a-ba50-47dc-838a-1aa18f95be08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36992d4a-ba50-47dc-838a-1aa18f95be08\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2606f9dadba909059c2a1a29eaa041493cf1f0540adfd02153a60e509d52714b/globalmount\"" pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.369808 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-web-config\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.370153 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/4888dfdb-3780-4d4b-ad3a-4c1238a72464-config-out\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.371337 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.375708 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-config-volume\") pod 
\"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.380113 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/4888dfdb-3780-4d4b-ad3a-4c1238a72464-tls-assets\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.391101 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.394599 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brfsp\" (UniqueName: \"kubernetes.io/projected/4888dfdb-3780-4d4b-ad3a-4c1238a72464-kube-api-access-brfsp\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.420078 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-36992d4a-ba50-47dc-838a-1aa18f95be08\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-36992d4a-ba50-47dc-838a-1aa18f95be08\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: I0125 00:35:11.871098 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls\") pod 
\"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:11 crc kubenswrapper[4947]: E0125 00:35:11.871332 4947 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 25 00:35:11 crc kubenswrapper[4947]: E0125 00:35:11.871389 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls podName:4888dfdb-3780-4d4b-ad3a-4c1238a72464 nodeName:}" failed. No retries permitted until 2026-01-25 00:35:12.871371871 +0000 UTC m=+1552.104362321 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "4888dfdb-3780-4d4b-ad3a-4c1238a72464") : secret "default-alertmanager-proxy-tls" not found Jan 25 00:35:12 crc kubenswrapper[4947]: I0125 00:35:12.885565 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:12 crc kubenswrapper[4947]: E0125 00:35:12.885702 4947 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 25 00:35:12 crc kubenswrapper[4947]: E0125 00:35:12.885748 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls podName:4888dfdb-3780-4d4b-ad3a-4c1238a72464 nodeName:}" failed. 
No retries permitted until 2026-01-25 00:35:14.885735065 +0000 UTC m=+1554.118725505 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "4888dfdb-3780-4d4b-ad3a-4c1238a72464") : secret "default-alertmanager-proxy-tls" not found Jan 25 00:35:14 crc kubenswrapper[4947]: I0125 00:35:14.916022 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:14 crc kubenswrapper[4947]: I0125 00:35:14.923462 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4888dfdb-3780-4d4b-ad3a-4c1238a72464-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"4888dfdb-3780-4d4b-ad3a-4c1238a72464\") " pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:15 crc kubenswrapper[4947]: I0125 00:35:15.090227 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 25 00:35:15 crc kubenswrapper[4947]: I0125 00:35:15.602518 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 25 00:35:15 crc kubenswrapper[4947]: W0125 00:35:15.621313 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4888dfdb_3780_4d4b_ad3a_4c1238a72464.slice/crio-9061de30bd526483a4d244468a93cf327930b9b0716d2865e61c53a22127e111 WatchSource:0}: Error finding container 9061de30bd526483a4d244468a93cf327930b9b0716d2865e61c53a22127e111: Status 404 returned error can't find the container with id 9061de30bd526483a4d244468a93cf327930b9b0716d2865e61c53a22127e111 Jan 25 00:35:15 crc kubenswrapper[4947]: I0125 00:35:15.986080 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"4888dfdb-3780-4d4b-ad3a-4c1238a72464","Type":"ContainerStarted","Data":"9061de30bd526483a4d244468a93cf327930b9b0716d2865e61c53a22127e111"} Jan 25 00:35:15 crc kubenswrapper[4947]: I0125 00:35:15.987853 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-4gkbj" event={"ID":"b6cfc9d0-598c-4149-8a38-ec02ced8d2b8","Type":"ContainerStarted","Data":"698db23d4f00bca2e1edf9c88b01b98fa842e3bad20e6bec687ce223f9530e4a"} Jan 25 00:35:16 crc kubenswrapper[4947]: I0125 00:35:16.009195 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6856cfb745-4gkbj" podStartSLOduration=1.798924161 podStartE2EDuration="9.00917122s" podCreationTimestamp="2026-01-25 00:35:07 +0000 UTC" firstStartedPulling="2026-01-25 00:35:07.920722578 +0000 UTC m=+1547.153713048" lastFinishedPulling="2026-01-25 00:35:15.130969667 +0000 UTC m=+1554.363960107" observedRunningTime="2026-01-25 00:35:16.004434087 +0000 UTC m=+1555.237424527" 
watchObservedRunningTime="2026-01-25 00:35:16.00917122 +0000 UTC m=+1555.242161660" Jan 25 00:35:17 crc kubenswrapper[4947]: I0125 00:35:17.072565 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:35:17 crc kubenswrapper[4947]: I0125 00:35:17.072903 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:35:17 crc kubenswrapper[4947]: I0125 00:35:17.072941 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" Jan 25 00:35:17 crc kubenswrapper[4947]: I0125 00:35:17.073455 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f"} pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 25 00:35:17 crc kubenswrapper[4947]: I0125 00:35:17.073508 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" containerID="cri-o://71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" gracePeriod=600 Jan 25 00:35:17 crc kubenswrapper[4947]: E0125 00:35:17.524327 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:35:18 crc kubenswrapper[4947]: I0125 00:35:18.002369 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"4888dfdb-3780-4d4b-ad3a-4c1238a72464","Type":"ContainerStarted","Data":"08ee944f8a5383a5c7dd356f384460db3c5d3bfe49cc6c3d9f6ce605b1501c65"} Jan 25 00:35:18 crc kubenswrapper[4947]: I0125 00:35:18.005174 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" exitCode=0 Jan 25 00:35:18 crc kubenswrapper[4947]: I0125 00:35:18.005214 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerDied","Data":"71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f"} Jan 25 00:35:18 crc kubenswrapper[4947]: I0125 00:35:18.005243 4947 scope.go:117] "RemoveContainer" containerID="e5fe371d16c66e0e03d5287fecc388949acddf08b1c19945fd233890f245ebb7" Jan 25 00:35:18 crc kubenswrapper[4947]: I0125 00:35:18.005622 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:35:18 crc kubenswrapper[4947]: E0125 00:35:18.005870 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:35:20 crc kubenswrapper[4947]: I0125 00:35:20.026702 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3d82b42a-6236-43af-8190-d28e96b2b933","Type":"ContainerStarted","Data":"569db56eeeb09d3ac3f8c9914cd4643b3e8fe4d3b4ba4dcc6aeaa197a2808281"} Jan 25 00:35:22 crc kubenswrapper[4947]: I0125 00:35:22.043536 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3d82b42a-6236-43af-8190-d28e96b2b933","Type":"ContainerStarted","Data":"9f254d35fa94f4389884a69e4c71a2d2ea013a1091ccffab642ff343ab29c9eb"} Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.817639 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf"] Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.818997 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.820886 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls" Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.821060 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret" Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.821067 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap" Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.821325 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-brmnv" Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.834821 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf"] Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.921141 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.921196 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " 
pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.921216 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.921264 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:24 crc kubenswrapper[4947]: I0125 00:35:24.921285 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frbp4\" (UniqueName: \"kubernetes.io/projected/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-kube-api-access-frbp4\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:25 crc kubenswrapper[4947]: I0125 00:35:25.023345 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:25 crc kubenswrapper[4947]: I0125 00:35:25.023460 
4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frbp4\" (UniqueName: \"kubernetes.io/projected/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-kube-api-access-frbp4\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:25 crc kubenswrapper[4947]: I0125 00:35:25.023702 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:25 crc kubenswrapper[4947]: I0125 00:35:25.023784 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:25 crc kubenswrapper[4947]: I0125 00:35:25.023835 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:25 crc kubenswrapper[4947]: I0125 00:35:25.024449 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:25 crc kubenswrapper[4947]: E0125 00:35:25.024614 4947 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 25 00:35:25 crc kubenswrapper[4947]: E0125 00:35:25.024712 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-default-cloud1-coll-meter-proxy-tls podName:0d98fa0e-0a1b-4139-b32a-dbc771dc0939 nodeName:}" failed. No retries permitted until 2026-01-25 00:35:25.524679523 +0000 UTC m=+1564.757670003 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" (UID: "0d98fa0e-0a1b-4139-b32a-dbc771dc0939") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 25 00:35:25 crc kubenswrapper[4947]: I0125 00:35:25.026390 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:25 crc kubenswrapper[4947]: I0125 00:35:25.037932 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: 
\"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:25 crc kubenswrapper[4947]: I0125 00:35:25.045920 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frbp4\" (UniqueName: \"kubernetes.io/projected/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-kube-api-access-frbp4\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:25 crc kubenswrapper[4947]: I0125 00:35:25.530374 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:25 crc kubenswrapper[4947]: E0125 00:35:25.530510 4947 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 25 00:35:25 crc kubenswrapper[4947]: E0125 00:35:25.530556 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-default-cloud1-coll-meter-proxy-tls podName:0d98fa0e-0a1b-4139-b32a-dbc771dc0939 nodeName:}" failed. No retries permitted until 2026-01-25 00:35:26.530541654 +0000 UTC m=+1565.763532094 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" (UID: "0d98fa0e-0a1b-4139-b32a-dbc771dc0939") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 25 00:35:26 crc kubenswrapper[4947]: I0125 00:35:26.544353 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:26 crc kubenswrapper[4947]: I0125 00:35:26.554386 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/0d98fa0e-0a1b-4139-b32a-dbc771dc0939-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf\" (UID: \"0d98fa0e-0a1b-4139-b32a-dbc771dc0939\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:26 crc kubenswrapper[4947]: I0125 00:35:26.687082 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.006823 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n"] Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.008922 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.014078 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.014221 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.027399 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n"] Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.081940 4947 generic.go:334] "Generic (PLEG): container finished" podID="4888dfdb-3780-4d4b-ad3a-4c1238a72464" containerID="08ee944f8a5383a5c7dd356f384460db3c5d3bfe49cc6c3d9f6ce605b1501c65" exitCode=0 Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.082001 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"4888dfdb-3780-4d4b-ad3a-4c1238a72464","Type":"ContainerDied","Data":"08ee944f8a5383a5c7dd356f384460db3c5d3bfe49cc6c3d9f6ce605b1501c65"} Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.153646 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d0837a20-7313-4da0-9df6-1ce849d1f029-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.153700 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-session-secret\") pod 
\"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.153746 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d0837a20-7313-4da0-9df6-1ce849d1f029-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.153826 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx9x9\" (UniqueName: \"kubernetes.io/projected/d0837a20-7313-4da0-9df6-1ce849d1f029-kube-api-access-vx9x9\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.153868 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.180019 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf"] Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.254869 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.255041 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d0837a20-7313-4da0-9df6-1ce849d1f029-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.255111 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.255251 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d0837a20-7313-4da0-9df6-1ce849d1f029-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.255528 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx9x9\" (UniqueName: \"kubernetes.io/projected/d0837a20-7313-4da0-9df6-1ce849d1f029-kube-api-access-vx9x9\") pod 
\"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: E0125 00:35:27.256418 4947 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 25 00:35:27 crc kubenswrapper[4947]: E0125 00:35:27.256479 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-default-cloud1-ceil-meter-proxy-tls podName:d0837a20-7313-4da0-9df6-1ce849d1f029 nodeName:}" failed. No retries permitted until 2026-01-25 00:35:27.756463192 +0000 UTC m=+1566.989453632 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" (UID: "d0837a20-7313-4da0-9df6-1ce849d1f029") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.256512 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/d0837a20-7313-4da0-9df6-1ce849d1f029-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.257677 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/d0837a20-7313-4da0-9df6-1ce849d1f029-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.262168 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.275713 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vx9x9\" (UniqueName: \"kubernetes.io/projected/d0837a20-7313-4da0-9df6-1ce849d1f029-kube-api-access-vx9x9\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: I0125 00:35:27.761442 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:27 crc kubenswrapper[4947]: E0125 00:35:27.761834 4947 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 25 00:35:27 crc kubenswrapper[4947]: E0125 00:35:27.761885 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-default-cloud1-ceil-meter-proxy-tls podName:d0837a20-7313-4da0-9df6-1ce849d1f029 nodeName:}" failed. 
No retries permitted until 2026-01-25 00:35:28.76187151 +0000 UTC m=+1567.994861950 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" (UID: "d0837a20-7313-4da0-9df6-1ce849d1f029") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 25 00:35:28 crc kubenswrapper[4947]: I0125 00:35:28.090578 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" event={"ID":"0d98fa0e-0a1b-4139-b32a-dbc771dc0939","Type":"ContainerStarted","Data":"4d97dfbde1af11c56f34ab0d2caa8202894d0c76cee6f888619471b880c92853"} Jan 25 00:35:28 crc kubenswrapper[4947]: I0125 00:35:28.777875 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:28 crc kubenswrapper[4947]: I0125 00:35:28.782819 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/d0837a20-7313-4da0-9df6-1ce849d1f029-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n\" (UID: \"d0837a20-7313-4da0-9df6-1ce849d1f029\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:28 crc kubenswrapper[4947]: I0125 00:35:28.837733 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" Jan 25 00:35:30 crc kubenswrapper[4947]: I0125 00:35:30.089906 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:35:30 crc kubenswrapper[4947]: E0125 00:35:30.090652 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.646554 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm"] Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.648415 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.654785 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.655171 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.656713 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm"] Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.834387 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m97zx\" (UniqueName: \"kubernetes.io/projected/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-kube-api-access-m97zx\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.834468 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.834492 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-default-cloud1-sens-meter-proxy-tls\") pod 
\"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.834513 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.834565 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.935969 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.936019 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.936041 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.936064 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.936116 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m97zx\" (UniqueName: \"kubernetes.io/projected/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-kube-api-access-m97zx\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: E0125 00:35:31.936542 4947 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Jan 25 00:35:31 crc kubenswrapper[4947]: E0125 00:35:31.936591 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-default-cloud1-sens-meter-proxy-tls podName:5ef6c1dd-abb0-4c2f-8aa5-13614c09e445 nodeName:}" failed. 
No retries permitted until 2026-01-25 00:35:32.436575636 +0000 UTC m=+1571.669566076 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" (UID: "5ef6c1dd-abb0-4c2f-8aa5-13614c09e445") : secret "default-cloud1-sens-meter-proxy-tls" not found Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.937565 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.938590 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.953272 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:31 crc kubenswrapper[4947]: I0125 00:35:31.956286 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m97zx\" (UniqueName: 
\"kubernetes.io/projected/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-kube-api-access-m97zx\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:32 crc kubenswrapper[4947]: I0125 00:35:32.441824 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:32 crc kubenswrapper[4947]: E0125 00:35:32.441971 4947 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Jan 25 00:35:32 crc kubenswrapper[4947]: E0125 00:35:32.442066 4947 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-default-cloud1-sens-meter-proxy-tls podName:5ef6c1dd-abb0-4c2f-8aa5-13614c09e445 nodeName:}" failed. No retries permitted until 2026-01-25 00:35:33.442042105 +0000 UTC m=+1572.675032565 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" (UID: "5ef6c1dd-abb0-4c2f-8aa5-13614c09e445") : secret "default-cloud1-sens-meter-proxy-tls" not found Jan 25 00:35:32 crc kubenswrapper[4947]: I0125 00:35:32.935517 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n"] Jan 25 00:35:33 crc kubenswrapper[4947]: I0125 00:35:33.457228 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:33 crc kubenswrapper[4947]: I0125 00:35:33.495195 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/5ef6c1dd-abb0-4c2f-8aa5-13614c09e445-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm\" (UID: \"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:33 crc kubenswrapper[4947]: I0125 00:35:33.781475 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" Jan 25 00:35:34 crc kubenswrapper[4947]: I0125 00:35:34.143690 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"3d82b42a-6236-43af-8190-d28e96b2b933","Type":"ContainerStarted","Data":"89afa61e3914ce8cb8693340cc5cc3c32db2918704ce204fe9d11a8dd7035a58"} Jan 25 00:35:34 crc kubenswrapper[4947]: I0125 00:35:34.145933 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" event={"ID":"0d98fa0e-0a1b-4139-b32a-dbc771dc0939","Type":"ContainerStarted","Data":"c8fb5908b4d1240858bee40840ae994e569cc4d5d414f2d1c285dab39bb3029c"} Jan 25 00:35:34 crc kubenswrapper[4947]: I0125 00:35:34.148201 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"4888dfdb-3780-4d4b-ad3a-4c1238a72464","Type":"ContainerStarted","Data":"dd02393a1ef9e451c3aa4e14f76831f95aa211851c0221f15c2b4d3e3616aa30"} Jan 25 00:35:34 crc kubenswrapper[4947]: I0125 00:35:34.149409 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" event={"ID":"d0837a20-7313-4da0-9df6-1ce849d1f029","Type":"ContainerStarted","Data":"cfa6a1ad3e6bc390bd231b034916d7e739a7eb9412b6be67c10036a98ea03803"} Jan 25 00:35:34 crc kubenswrapper[4947]: I0125 00:35:34.167902 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.443256582 podStartE2EDuration="38.167885162s" podCreationTimestamp="2026-01-25 00:34:56 +0000 UTC" firstStartedPulling="2026-01-25 00:34:59.788901445 +0000 UTC m=+1539.021891905" lastFinishedPulling="2026-01-25 00:35:33.513530045 +0000 UTC m=+1572.746520485" observedRunningTime="2026-01-25 00:35:34.164526845 +0000 UTC m=+1573.397517285" 
watchObservedRunningTime="2026-01-25 00:35:34.167885162 +0000 UTC m=+1573.400875602" Jan 25 00:35:34 crc kubenswrapper[4947]: I0125 00:35:34.191987 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm"] Jan 25 00:35:34 crc kubenswrapper[4947]: I0125 00:35:34.296797 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/prometheus-default-0" Jan 25 00:35:35 crc kubenswrapper[4947]: I0125 00:35:35.158732 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" event={"ID":"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445","Type":"ContainerStarted","Data":"63f51f9bec4252effca2b75fa251bd57f899aa6aceb9e547c853c07a53ddd559"} Jan 25 00:35:35 crc kubenswrapper[4947]: I0125 00:35:35.160235 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" event={"ID":"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445","Type":"ContainerStarted","Data":"9e73e5a0278310d93f36aa55d1882513dae576a683ba398eb59f9c836ffed722"} Jan 25 00:35:35 crc kubenswrapper[4947]: I0125 00:35:35.161997 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" event={"ID":"0d98fa0e-0a1b-4139-b32a-dbc771dc0939","Type":"ContainerStarted","Data":"cfcf79fed5bba223ce07589542b769fe8fff87686f83fd8454f58b1e7ee7574b"} Jan 25 00:35:35 crc kubenswrapper[4947]: I0125 00:35:35.164867 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" event={"ID":"d0837a20-7313-4da0-9df6-1ce849d1f029","Type":"ContainerStarted","Data":"31bf172e06d3d6ed5dd0c2957898aced9ccadd1061a4b014252d4a8765250880"} Jan 25 00:35:35 crc kubenswrapper[4947]: I0125 00:35:35.164901 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" event={"ID":"d0837a20-7313-4da0-9df6-1ce849d1f029","Type":"ContainerStarted","Data":"51738f9e40b34f8bc1cf8fa8e4a44db2419a20877eb84aefbcbdad9edda367f6"} Jan 25 00:35:36 crc kubenswrapper[4947]: I0125 00:35:36.178515 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" event={"ID":"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445","Type":"ContainerStarted","Data":"952628d963a6ca1dc30326049a85e581da8b18963ce22aa7b39a33466cdae78a"} Jan 25 00:35:36 crc kubenswrapper[4947]: I0125 00:35:36.183690 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"4888dfdb-3780-4d4b-ad3a-4c1238a72464","Type":"ContainerStarted","Data":"778a0a35697b5c4b629853bec5fd9b728a61b9623a0bdf8cb70b8a05bd1f8bad"} Jan 25 00:35:36 crc kubenswrapper[4947]: I0125 00:35:36.184066 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"4888dfdb-3780-4d4b-ad3a-4c1238a72464","Type":"ContainerStarted","Data":"66b8ebe4fed73736994a2bf82364afc019d0f54c3066ac67438ca706e938bcb6"} Jan 25 00:35:36 crc kubenswrapper[4947]: I0125 00:35:36.223972 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=17.430590389 podStartE2EDuration="26.223946743s" podCreationTimestamp="2026-01-25 00:35:10 +0000 UTC" firstStartedPulling="2026-01-25 00:35:27.084400599 +0000 UTC m=+1566.317391049" lastFinishedPulling="2026-01-25 00:35:35.877756963 +0000 UTC m=+1575.110747403" observedRunningTime="2026-01-25 00:35:36.206341353 +0000 UTC m=+1575.439331813" watchObservedRunningTime="2026-01-25 00:35:36.223946743 +0000 UTC m=+1575.456937183" Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.515747 4947 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f"] Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.518012 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.519957 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert" Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.521084 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap" Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.528185 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f"] Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.667220 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/9b3c0215-a9e0-45e1-a844-c93fd70138c9-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.667279 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9b3c0215-a9e0-45e1-a844-c93fd70138c9-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.667322 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84lnh\" 
(UniqueName: \"kubernetes.io/projected/9b3c0215-a9e0-45e1-a844-c93fd70138c9-kube-api-access-84lnh\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.667384 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9b3c0215-a9e0-45e1-a844-c93fd70138c9-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.769384 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84lnh\" (UniqueName: \"kubernetes.io/projected/9b3c0215-a9e0-45e1-a844-c93fd70138c9-kube-api-access-84lnh\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.770990 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9b3c0215-a9e0-45e1-a844-c93fd70138c9-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.771096 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/9b3c0215-a9e0-45e1-a844-c93fd70138c9-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: 
\"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.771154 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9b3c0215-a9e0-45e1-a844-c93fd70138c9-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.772306 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9b3c0215-a9e0-45e1-a844-c93fd70138c9-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.774028 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9b3c0215-a9e0-45e1-a844-c93fd70138c9-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.779154 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/9b3c0215-a9e0-45e1-a844-c93fd70138c9-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.791995 4947 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-84lnh\" (UniqueName: \"kubernetes.io/projected/9b3c0215-a9e0-45e1-a844-c93fd70138c9-kube-api-access-84lnh\") pod \"default-cloud1-coll-event-smartgateway-9c5498458-pjx7f\" (UID: \"9b3c0215-a9e0-45e1-a844-c93fd70138c9\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" Jan 25 00:35:39 crc kubenswrapper[4947]: I0125 00:35:39.846006 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.693061 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c"] Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.694425 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.696669 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap" Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.713712 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c"] Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.791731 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b022a945-2af3-4275-bc4b-5db0790be691-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.792096 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b022a945-2af3-4275-bc4b-5db0790be691-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.792156 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckbm7\" (UniqueName: \"kubernetes.io/projected/b022a945-2af3-4275-bc4b-5db0790be691-kube-api-access-ckbm7\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.792187 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/b022a945-2af3-4275-bc4b-5db0790be691-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.893830 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckbm7\" (UniqueName: \"kubernetes.io/projected/b022a945-2af3-4275-bc4b-5db0790be691-kube-api-access-ckbm7\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.893891 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/b022a945-2af3-4275-bc4b-5db0790be691-elastic-certs\") pod 
\"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.893930 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b022a945-2af3-4275-bc4b-5db0790be691-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.893979 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b022a945-2af3-4275-bc4b-5db0790be691-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.895822 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/b022a945-2af3-4275-bc4b-5db0790be691-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.899920 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/b022a945-2af3-4275-bc4b-5db0790be691-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 
00:35:40.901410 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/b022a945-2af3-4275-bc4b-5db0790be691-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.906959 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f"] Jan 25 00:35:40 crc kubenswrapper[4947]: I0125 00:35:40.910739 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckbm7\" (UniqueName: \"kubernetes.io/projected/b022a945-2af3-4275-bc4b-5db0790be691-kube-api-access-ckbm7\") pod \"default-cloud1-ceil-event-smartgateway-899c7f46d-5982c\" (UID: \"b022a945-2af3-4275-bc4b-5db0790be691\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" Jan 25 00:35:41 crc kubenswrapper[4947]: I0125 00:35:41.008634 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" Jan 25 00:35:41 crc kubenswrapper[4947]: I0125 00:35:41.253293 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" event={"ID":"9b3c0215-a9e0-45e1-a844-c93fd70138c9","Type":"ContainerStarted","Data":"d9ce820614e5d7a763129eac969f4e1c9c86f14f7a6adc4b28a0257f580b10cd"} Jan 25 00:35:41 crc kubenswrapper[4947]: I0125 00:35:41.253342 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" event={"ID":"9b3c0215-a9e0-45e1-a844-c93fd70138c9","Type":"ContainerStarted","Data":"760ac999d60a248748a1ee5dbc898b19e76b9e0dd5eb790f71442581c0b7c206"} Jan 25 00:35:41 crc kubenswrapper[4947]: I0125 00:35:41.261436 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" event={"ID":"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445","Type":"ContainerStarted","Data":"c37b71c00d0cd5a7f621e61de65d3ac4b0d98cefac0f8d02ed6117499871f294"} Jan 25 00:35:41 crc kubenswrapper[4947]: I0125 00:35:41.268000 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" event={"ID":"0d98fa0e-0a1b-4139-b32a-dbc771dc0939","Type":"ContainerStarted","Data":"6c3ec49fb78816b957de774b43eca859b84afab9bc1c17ce5ccbcdbdc9629800"} Jan 25 00:35:41 crc kubenswrapper[4947]: I0125 00:35:41.281983 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" event={"ID":"d0837a20-7313-4da0-9df6-1ce849d1f029","Type":"ContainerStarted","Data":"d5c02e9b61650bb2174f7eed8f48b8bea2b5004263ebc20f70626bd94c40d2b1"} Jan 25 00:35:41 crc kubenswrapper[4947]: I0125 00:35:41.284370 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" podStartSLOduration=3.769903102 podStartE2EDuration="10.284350156s" podCreationTimestamp="2026-01-25 00:35:31 +0000 UTC" firstStartedPulling="2026-01-25 00:35:34.203883422 +0000 UTC m=+1573.436873862" lastFinishedPulling="2026-01-25 00:35:40.718330476 +0000 UTC m=+1579.951320916" observedRunningTime="2026-01-25 00:35:41.283920215 +0000 UTC m=+1580.516910655" watchObservedRunningTime="2026-01-25 00:35:41.284350156 +0000 UTC m=+1580.517340596" Jan 25 00:35:41 crc kubenswrapper[4947]: I0125 00:35:41.360849 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" podStartSLOduration=3.773651158 podStartE2EDuration="17.360828693s" podCreationTimestamp="2026-01-25 00:35:24 +0000 UTC" firstStartedPulling="2026-01-25 00:35:27.187345628 +0000 UTC m=+1566.420336068" lastFinishedPulling="2026-01-25 00:35:40.774523163 +0000 UTC m=+1580.007513603" observedRunningTime="2026-01-25 00:35:41.359388536 +0000 UTC m=+1580.592378986" watchObservedRunningTime="2026-01-25 00:35:41.360828693 +0000 UTC m=+1580.593819133" Jan 25 00:35:41 crc kubenswrapper[4947]: I0125 00:35:41.364273 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" podStartSLOduration=7.841365465 podStartE2EDuration="15.364254012s" podCreationTimestamp="2026-01-25 00:35:26 +0000 UTC" firstStartedPulling="2026-01-25 00:35:33.323095692 +0000 UTC m=+1572.556086132" lastFinishedPulling="2026-01-25 00:35:40.845984239 +0000 UTC m=+1580.078974679" observedRunningTime="2026-01-25 00:35:41.324468854 +0000 UTC m=+1580.557459294" watchObservedRunningTime="2026-01-25 00:35:41.364254012 +0000 UTC m=+1580.597244452" Jan 25 00:35:41 crc kubenswrapper[4947]: I0125 00:35:41.550818 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c"] Jan 25 00:35:41 crc kubenswrapper[4947]: W0125 00:35:41.553785 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb022a945_2af3_4275_bc4b_5db0790be691.slice/crio-f5ca8876c4d893e65d935a3e684f23d91ef308c1f393325fc7652bb85d3a7133 WatchSource:0}: Error finding container f5ca8876c4d893e65d935a3e684f23d91ef308c1f393325fc7652bb85d3a7133: Status 404 returned error can't find the container with id f5ca8876c4d893e65d935a3e684f23d91ef308c1f393325fc7652bb85d3a7133 Jan 25 00:35:42 crc kubenswrapper[4947]: I0125 00:35:42.290541 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" event={"ID":"b022a945-2af3-4275-bc4b-5db0790be691","Type":"ContainerStarted","Data":"07e470f4131ce01a4bea2c3222308a0262feb230fea4756d471f02edaafa3cff"} Jan 25 00:35:42 crc kubenswrapper[4947]: I0125 00:35:42.292115 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" event={"ID":"b022a945-2af3-4275-bc4b-5db0790be691","Type":"ContainerStarted","Data":"0c2775766e7822dcc8fc40833d9e3a86b971caee2c80f169a3c9941c2462eb7b"} Jan 25 00:35:42 crc kubenswrapper[4947]: I0125 00:35:42.292278 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" event={"ID":"b022a945-2af3-4275-bc4b-5db0790be691","Type":"ContainerStarted","Data":"f5ca8876c4d893e65d935a3e684f23d91ef308c1f393325fc7652bb85d3a7133"} Jan 25 00:35:42 crc kubenswrapper[4947]: I0125 00:35:42.292941 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" 
event={"ID":"9b3c0215-a9e0-45e1-a844-c93fd70138c9","Type":"ContainerStarted","Data":"09b2ec1f4cfd62ffdba3f025deddd8d3b5466b10c3908d282236b56898c225d4"} Jan 25 00:35:42 crc kubenswrapper[4947]: I0125 00:35:42.311784 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" podStartSLOduration=2.024195855 podStartE2EDuration="2.311766595s" podCreationTimestamp="2026-01-25 00:35:40 +0000 UTC" firstStartedPulling="2026-01-25 00:35:41.557474398 +0000 UTC m=+1580.790464838" lastFinishedPulling="2026-01-25 00:35:41.845045138 +0000 UTC m=+1581.078035578" observedRunningTime="2026-01-25 00:35:42.30543732 +0000 UTC m=+1581.538427770" watchObservedRunningTime="2026-01-25 00:35:42.311766595 +0000 UTC m=+1581.544757035" Jan 25 00:35:42 crc kubenswrapper[4947]: I0125 00:35:42.324601 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" podStartSLOduration=2.924874903 podStartE2EDuration="3.32458053s" podCreationTimestamp="2026-01-25 00:35:39 +0000 UTC" firstStartedPulling="2026-01-25 00:35:40.920868445 +0000 UTC m=+1580.153858885" lastFinishedPulling="2026-01-25 00:35:41.320574072 +0000 UTC m=+1580.553564512" observedRunningTime="2026-01-25 00:35:42.323842921 +0000 UTC m=+1581.556833381" watchObservedRunningTime="2026-01-25 00:35:42.32458053 +0000 UTC m=+1581.557570970" Jan 25 00:35:43 crc kubenswrapper[4947]: I0125 00:35:43.089674 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:35:43 crc kubenswrapper[4947]: E0125 00:35:43.089895 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:35:44 crc kubenswrapper[4947]: I0125 00:35:44.296554 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Jan 25 00:35:44 crc kubenswrapper[4947]: I0125 00:35:44.347709 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Jan 25 00:35:44 crc kubenswrapper[4947]: I0125 00:35:44.392739 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.167245 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pvgb8"] Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.168093 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" podUID="7b062645-435c-4461-93cd-8cbe7cd8e733" containerName="default-interconnect" containerID="cri-o://99188ef6337d790fc8a682f69dd76728f1c088acbf4f5174dab2cdba95af3406" gracePeriod=30 Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.366775 4947 generic.go:334] "Generic (PLEG): container finished" podID="9b3c0215-a9e0-45e1-a844-c93fd70138c9" containerID="d9ce820614e5d7a763129eac969f4e1c9c86f14f7a6adc4b28a0257f580b10cd" exitCode=0 Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.366857 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" event={"ID":"9b3c0215-a9e0-45e1-a844-c93fd70138c9","Type":"ContainerDied","Data":"d9ce820614e5d7a763129eac969f4e1c9c86f14f7a6adc4b28a0257f580b10cd"} Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.367504 
4947 scope.go:117] "RemoveContainer" containerID="d9ce820614e5d7a763129eac969f4e1c9c86f14f7a6adc4b28a0257f580b10cd" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.370037 4947 generic.go:334] "Generic (PLEG): container finished" podID="5ef6c1dd-abb0-4c2f-8aa5-13614c09e445" containerID="952628d963a6ca1dc30326049a85e581da8b18963ce22aa7b39a33466cdae78a" exitCode=0 Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.370112 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" event={"ID":"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445","Type":"ContainerDied","Data":"952628d963a6ca1dc30326049a85e581da8b18963ce22aa7b39a33466cdae78a"} Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.372690 4947 scope.go:117] "RemoveContainer" containerID="952628d963a6ca1dc30326049a85e581da8b18963ce22aa7b39a33466cdae78a" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.384813 4947 generic.go:334] "Generic (PLEG): container finished" podID="7b062645-435c-4461-93cd-8cbe7cd8e733" containerID="99188ef6337d790fc8a682f69dd76728f1c088acbf4f5174dab2cdba95af3406" exitCode=0 Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.384890 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" event={"ID":"7b062645-435c-4461-93cd-8cbe7cd8e733","Type":"ContainerDied","Data":"99188ef6337d790fc8a682f69dd76728f1c088acbf4f5174dab2cdba95af3406"} Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.630544 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.770145 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-users\") pod \"7b062645-435c-4461-93cd-8cbe7cd8e733\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.770212 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-ca\") pod \"7b062645-435c-4461-93cd-8cbe7cd8e733\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.770248 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-credentials\") pod \"7b062645-435c-4461-93cd-8cbe7cd8e733\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.770324 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-credentials\") pod \"7b062645-435c-4461-93cd-8cbe7cd8e733\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.770355 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-config\") pod \"7b062645-435c-4461-93cd-8cbe7cd8e733\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " Jan 25 
00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.770386 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-ca\") pod \"7b062645-435c-4461-93cd-8cbe7cd8e733\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.770410 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vf7l\" (UniqueName: \"kubernetes.io/projected/7b062645-435c-4461-93cd-8cbe7cd8e733-kube-api-access-5vf7l\") pod \"7b062645-435c-4461-93cd-8cbe7cd8e733\" (UID: \"7b062645-435c-4461-93cd-8cbe7cd8e733\") " Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.771462 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "7b062645-435c-4461-93cd-8cbe7cd8e733" (UID: "7b062645-435c-4461-93cd-8cbe7cd8e733"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.775162 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "7b062645-435c-4461-93cd-8cbe7cd8e733" (UID: "7b062645-435c-4461-93cd-8cbe7cd8e733"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.781423 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "7b062645-435c-4461-93cd-8cbe7cd8e733" (UID: "7b062645-435c-4461-93cd-8cbe7cd8e733"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.781430 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b062645-435c-4461-93cd-8cbe7cd8e733-kube-api-access-5vf7l" (OuterVolumeSpecName: "kube-api-access-5vf7l") pod "7b062645-435c-4461-93cd-8cbe7cd8e733" (UID: "7b062645-435c-4461-93cd-8cbe7cd8e733"). InnerVolumeSpecName "kube-api-access-5vf7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.781481 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "7b062645-435c-4461-93cd-8cbe7cd8e733" (UID: "7b062645-435c-4461-93cd-8cbe7cd8e733"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.783514 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "7b062645-435c-4461-93cd-8cbe7cd8e733" (UID: "7b062645-435c-4461-93cd-8cbe7cd8e733"). InnerVolumeSpecName "default-interconnect-openstack-credentials". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.791235 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "7b062645-435c-4461-93cd-8cbe7cd8e733" (UID: "7b062645-435c-4461-93cd-8cbe7cd8e733"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.872339 4947 reconciler_common.go:293] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-users\") on node \"crc\" DevicePath \"\"" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.872386 4947 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.872406 4947 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.872423 4947 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.872436 4947 reconciler_common.go:293] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/7b062645-435c-4461-93cd-8cbe7cd8e733-sasl-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 
00:35:52.872448 4947 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/7b062645-435c-4461-93cd-8cbe7cd8e733-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Jan 25 00:35:52 crc kubenswrapper[4947]: I0125 00:35:52.872459 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vf7l\" (UniqueName: \"kubernetes.io/projected/7b062645-435c-4461-93cd-8cbe7cd8e733-kube-api-access-5vf7l\") on node \"crc\" DevicePath \"\"" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.391819 4947 generic.go:334] "Generic (PLEG): container finished" podID="b022a945-2af3-4275-bc4b-5db0790be691" containerID="0c2775766e7822dcc8fc40833d9e3a86b971caee2c80f169a3c9941c2462eb7b" exitCode=0 Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.391876 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" event={"ID":"b022a945-2af3-4275-bc4b-5db0790be691","Type":"ContainerDied","Data":"0c2775766e7822dcc8fc40833d9e3a86b971caee2c80f169a3c9941c2462eb7b"} Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.392351 4947 scope.go:117] "RemoveContainer" containerID="0c2775766e7822dcc8fc40833d9e3a86b971caee2c80f169a3c9941c2462eb7b" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.397238 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" event={"ID":"9b3c0215-a9e0-45e1-a844-c93fd70138c9","Type":"ContainerStarted","Data":"337e296ba4309a43005154c28f2f3bc7aaf976138af72dd9cd336f268755d3b5"} Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.400029 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" 
event={"ID":"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445","Type":"ContainerStarted","Data":"bf6ac4b92c89fa1eb2bbe3998935500a69af02ce449b2f439c10729177848fce"} Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.402523 4947 generic.go:334] "Generic (PLEG): container finished" podID="0d98fa0e-0a1b-4139-b32a-dbc771dc0939" containerID="cfcf79fed5bba223ce07589542b769fe8fff87686f83fd8454f58b1e7ee7574b" exitCode=0 Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.402593 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" event={"ID":"0d98fa0e-0a1b-4139-b32a-dbc771dc0939","Type":"ContainerDied","Data":"cfcf79fed5bba223ce07589542b769fe8fff87686f83fd8454f58b1e7ee7574b"} Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.403283 4947 scope.go:117] "RemoveContainer" containerID="cfcf79fed5bba223ce07589542b769fe8fff87686f83fd8454f58b1e7ee7574b" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.405532 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" event={"ID":"7b062645-435c-4461-93cd-8cbe7cd8e733","Type":"ContainerDied","Data":"1e919ed0615f4f4827a60cadd255f60dc5f4870e6c12a5fe6e5916224ada10bc"} Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.405581 4947 scope.go:117] "RemoveContainer" containerID="99188ef6337d790fc8a682f69dd76728f1c088acbf4f5174dab2cdba95af3406" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.405669 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-pvgb8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.412905 4947 generic.go:334] "Generic (PLEG): container finished" podID="d0837a20-7313-4da0-9df6-1ce849d1f029" containerID="31bf172e06d3d6ed5dd0c2957898aced9ccadd1061a4b014252d4a8765250880" exitCode=0 Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.412959 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" event={"ID":"d0837a20-7313-4da0-9df6-1ce849d1f029","Type":"ContainerDied","Data":"31bf172e06d3d6ed5dd0c2957898aced9ccadd1061a4b014252d4a8765250880"} Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.413389 4947 scope.go:117] "RemoveContainer" containerID="31bf172e06d3d6ed5dd0c2957898aced9ccadd1061a4b014252d4a8765250880" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.538239 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pvgb8"] Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.546587 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pvgb8"] Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.669404 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-qb5z8"] Jan 25 00:35:53 crc kubenswrapper[4947]: E0125 00:35:53.670026 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b062645-435c-4461-93cd-8cbe7cd8e733" containerName="default-interconnect" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.670092 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b062645-435c-4461-93cd-8cbe7cd8e733" containerName="default-interconnect" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.670274 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b062645-435c-4461-93cd-8cbe7cd8e733" 
containerName="default-interconnect" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.670696 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.674683 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.674946 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.675117 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.675259 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-ktwfn" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.675307 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.675272 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.675497 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.684359 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-qb5z8"] Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.790813 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: 
\"kubernetes.io/configmap/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-sasl-config\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.790863 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.790919 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-sasl-users\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.790947 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.790968 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-openstack-credentials\") pod 
\"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.790986 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5nrl\" (UniqueName: \"kubernetes.io/projected/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-kube-api-access-x5nrl\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.791007 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: E0125 00:35:53.881464 4947 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b3c0215_a9e0_45e1_a844_c93fd70138c9.slice/crio-337e296ba4309a43005154c28f2f3bc7aaf976138af72dd9cd336f268755d3b5.scope\": RecentStats: unable to find data in memory cache]" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.892449 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-sasl-config\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.892514 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.892573 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-sasl-users\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.892601 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.892625 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.892648 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5nrl\" (UniqueName: \"kubernetes.io/projected/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-kube-api-access-x5nrl\") pod 
\"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.893288 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.893341 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-sasl-config\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.901320 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-sasl-users\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.901759 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.903405 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.903882 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.908975 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:53 crc kubenswrapper[4947]: I0125 00:35:53.914532 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5nrl\" (UniqueName: \"kubernetes.io/projected/53107577-1f0a-4c8d-b5e5-81e4d415f3a1-kube-api-access-x5nrl\") pod \"default-interconnect-68864d46cb-qb5z8\" (UID: \"53107577-1f0a-4c8d-b5e5-81e4d415f3a1\") " pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.052040 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.089939 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:35:54 crc kubenswrapper[4947]: E0125 00:35:54.090209 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.366750 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.367631 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.370752 4947 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.375294 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.381664 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.447255 4947 generic.go:334] "Generic (PLEG): container finished" podID="5ef6c1dd-abb0-4c2f-8aa5-13614c09e445" containerID="bf6ac4b92c89fa1eb2bbe3998935500a69af02ce449b2f439c10729177848fce" exitCode=0 Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.447321 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" event={"ID":"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445","Type":"ContainerDied","Data":"bf6ac4b92c89fa1eb2bbe3998935500a69af02ce449b2f439c10729177848fce"} Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.447354 4947 scope.go:117] "RemoveContainer" containerID="952628d963a6ca1dc30326049a85e581da8b18963ce22aa7b39a33466cdae78a" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.447881 4947 scope.go:117] "RemoveContainer" containerID="bf6ac4b92c89fa1eb2bbe3998935500a69af02ce449b2f439c10729177848fce" Jan 25 00:35:54 crc kubenswrapper[4947]: E0125 00:35:54.448244 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm_service-telemetry(5ef6c1dd-abb0-4c2f-8aa5-13614c09e445)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" 
podUID="5ef6c1dd-abb0-4c2f-8aa5-13614c09e445" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.472250 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" event={"ID":"0d98fa0e-0a1b-4139-b32a-dbc771dc0939","Type":"ContainerStarted","Data":"35e16116d28f7d36faafc8c370c2b58fc605be75020a2109beeae992364e6c13"} Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.483354 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" event={"ID":"d0837a20-7313-4da0-9df6-1ce849d1f029","Type":"ContainerStarted","Data":"8a2e587071b057abdd25da40e9803f24478fc5cb9081aa1437831aad84949e07"} Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.505314 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/1e239640-20ad-42e0-8db4-0ada55b1274c-qdr-test-config\") pod \"qdr-test\" (UID: \"1e239640-20ad-42e0-8db4-0ada55b1274c\") " pod="service-telemetry/qdr-test" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.505376 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/1e239640-20ad-42e0-8db4-0ada55b1274c-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"1e239640-20ad-42e0-8db4-0ada55b1274c\") " pod="service-telemetry/qdr-test" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.505421 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rhdt\" (UniqueName: \"kubernetes.io/projected/1e239640-20ad-42e0-8db4-0ada55b1274c-kube-api-access-6rhdt\") pod \"qdr-test\" (UID: \"1e239640-20ad-42e0-8db4-0ada55b1274c\") " pod="service-telemetry/qdr-test" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.510174 4947 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" event={"ID":"b022a945-2af3-4275-bc4b-5db0790be691","Type":"ContainerStarted","Data":"aeea95242d810e307e8bdf31d6bb90ca0e574420d839fb3d8a0912352dc00ac9"} Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.530463 4947 generic.go:334] "Generic (PLEG): container finished" podID="9b3c0215-a9e0-45e1-a844-c93fd70138c9" containerID="337e296ba4309a43005154c28f2f3bc7aaf976138af72dd9cd336f268755d3b5" exitCode=0 Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.530507 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" event={"ID":"9b3c0215-a9e0-45e1-a844-c93fd70138c9","Type":"ContainerDied","Data":"337e296ba4309a43005154c28f2f3bc7aaf976138af72dd9cd336f268755d3b5"} Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.530539 4947 scope.go:117] "RemoveContainer" containerID="d9ce820614e5d7a763129eac969f4e1c9c86f14f7a6adc4b28a0257f580b10cd" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.531084 4947 scope.go:117] "RemoveContainer" containerID="337e296ba4309a43005154c28f2f3bc7aaf976138af72dd9cd336f268755d3b5" Jan 25 00:35:54 crc kubenswrapper[4947]: E0125 00:35:54.531463 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-9c5498458-pjx7f_service-telemetry(9b3c0215-a9e0-45e1-a844-c93fd70138c9)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" podUID="9b3c0215-a9e0-45e1-a844-c93fd70138c9" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.568904 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-qb5z8"] Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.606515 4947 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6rhdt\" (UniqueName: \"kubernetes.io/projected/1e239640-20ad-42e0-8db4-0ada55b1274c-kube-api-access-6rhdt\") pod \"qdr-test\" (UID: \"1e239640-20ad-42e0-8db4-0ada55b1274c\") " pod="service-telemetry/qdr-test" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.607001 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/1e239640-20ad-42e0-8db4-0ada55b1274c-qdr-test-config\") pod \"qdr-test\" (UID: \"1e239640-20ad-42e0-8db4-0ada55b1274c\") " pod="service-telemetry/qdr-test" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.609554 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/1e239640-20ad-42e0-8db4-0ada55b1274c-qdr-test-config\") pod \"qdr-test\" (UID: \"1e239640-20ad-42e0-8db4-0ada55b1274c\") " pod="service-telemetry/qdr-test" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.607103 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/1e239640-20ad-42e0-8db4-0ada55b1274c-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"1e239640-20ad-42e0-8db4-0ada55b1274c\") " pod="service-telemetry/qdr-test" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.625151 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/1e239640-20ad-42e0-8db4-0ada55b1274c-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"1e239640-20ad-42e0-8db4-0ada55b1274c\") " pod="service-telemetry/qdr-test" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.638960 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rhdt\" (UniqueName: 
\"kubernetes.io/projected/1e239640-20ad-42e0-8db4-0ada55b1274c-kube-api-access-6rhdt\") pod \"qdr-test\" (UID: \"1e239640-20ad-42e0-8db4-0ada55b1274c\") " pod="service-telemetry/qdr-test" Jan 25 00:35:54 crc kubenswrapper[4947]: I0125 00:35:54.683462 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.098692 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b062645-435c-4461-93cd-8cbe7cd8e733" path="/var/lib/kubelet/pods/7b062645-435c-4461-93cd-8cbe7cd8e733/volumes" Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.136960 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 25 00:35:55 crc kubenswrapper[4947]: W0125 00:35:55.142455 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e239640_20ad_42e0_8db4_0ada55b1274c.slice/crio-a0910e5e4d03284a7f756a3f4b820f03ba302d65fc240c6f9c42739539e4fd9c WatchSource:0}: Error finding container a0910e5e4d03284a7f756a3f4b820f03ba302d65fc240c6f9c42739539e4fd9c: Status 404 returned error can't find the container with id a0910e5e4d03284a7f756a3f4b820f03ba302d65fc240c6f9c42739539e4fd9c Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.541667 4947 generic.go:334] "Generic (PLEG): container finished" podID="0d98fa0e-0a1b-4139-b32a-dbc771dc0939" containerID="35e16116d28f7d36faafc8c370c2b58fc605be75020a2109beeae992364e6c13" exitCode=0 Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.541721 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" event={"ID":"0d98fa0e-0a1b-4139-b32a-dbc771dc0939","Type":"ContainerDied","Data":"35e16116d28f7d36faafc8c370c2b58fc605be75020a2109beeae992364e6c13"} Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.541750 4947 scope.go:117] 
"RemoveContainer" containerID="cfcf79fed5bba223ce07589542b769fe8fff87686f83fd8454f58b1e7ee7574b" Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.542328 4947 scope.go:117] "RemoveContainer" containerID="35e16116d28f7d36faafc8c370c2b58fc605be75020a2109beeae992364e6c13" Jan 25 00:35:55 crc kubenswrapper[4947]: E0125 00:35:55.542541 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf_service-telemetry(0d98fa0e-0a1b-4139-b32a-dbc771dc0939)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" podUID="0d98fa0e-0a1b-4139-b32a-dbc771dc0939" Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.546426 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"1e239640-20ad-42e0-8db4-0ada55b1274c","Type":"ContainerStarted","Data":"a0910e5e4d03284a7f756a3f4b820f03ba302d65fc240c6f9c42739539e4fd9c"} Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.549412 4947 generic.go:334] "Generic (PLEG): container finished" podID="d0837a20-7313-4da0-9df6-1ce849d1f029" containerID="8a2e587071b057abdd25da40e9803f24478fc5cb9081aa1437831aad84949e07" exitCode=0 Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.549472 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" event={"ID":"d0837a20-7313-4da0-9df6-1ce849d1f029","Type":"ContainerDied","Data":"8a2e587071b057abdd25da40e9803f24478fc5cb9081aa1437831aad84949e07"} Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.549778 4947 scope.go:117] "RemoveContainer" containerID="8a2e587071b057abdd25da40e9803f24478fc5cb9081aa1437831aad84949e07" Jan 25 00:35:55 crc kubenswrapper[4947]: E0125 00:35:55.549944 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n_service-telemetry(d0837a20-7313-4da0-9df6-1ce849d1f029)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" podUID="d0837a20-7313-4da0-9df6-1ce849d1f029" Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.551231 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" event={"ID":"53107577-1f0a-4c8d-b5e5-81e4d415f3a1","Type":"ContainerStarted","Data":"ccf4c16e8392e9c70c4063788fd88c5fefe69cdef47d6796fde049be46c08036"} Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.551251 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" event={"ID":"53107577-1f0a-4c8d-b5e5-81e4d415f3a1","Type":"ContainerStarted","Data":"9ef20d242c2ad9ed496929283d7ff693cd5bf96f1e3d7716bc2f45a0ca67d0ac"} Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.553316 4947 generic.go:334] "Generic (PLEG): container finished" podID="b022a945-2af3-4275-bc4b-5db0790be691" containerID="aeea95242d810e307e8bdf31d6bb90ca0e574420d839fb3d8a0912352dc00ac9" exitCode=0 Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.553338 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" event={"ID":"b022a945-2af3-4275-bc4b-5db0790be691","Type":"ContainerDied","Data":"aeea95242d810e307e8bdf31d6bb90ca0e574420d839fb3d8a0912352dc00ac9"} Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.553588 4947 scope.go:117] "RemoveContainer" containerID="aeea95242d810e307e8bdf31d6bb90ca0e574420d839fb3d8a0912352dc00ac9" Jan 25 00:35:55 crc kubenswrapper[4947]: E0125 00:35:55.553789 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting 
failed container=bridge pod=default-cloud1-ceil-event-smartgateway-899c7f46d-5982c_service-telemetry(b022a945-2af3-4275-bc4b-5db0790be691)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" podUID="b022a945-2af3-4275-bc4b-5db0790be691" Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.596629 4947 scope.go:117] "RemoveContainer" containerID="31bf172e06d3d6ed5dd0c2957898aced9ccadd1061a4b014252d4a8765250880" Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.615642 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-qb5z8" podStartSLOduration=3.615622352 podStartE2EDuration="3.615622352s" podCreationTimestamp="2026-01-25 00:35:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-25 00:35:55.613842345 +0000 UTC m=+1594.846832785" watchObservedRunningTime="2026-01-25 00:35:55.615622352 +0000 UTC m=+1594.848612792" Jan 25 00:35:55 crc kubenswrapper[4947]: I0125 00:35:55.643186 4947 scope.go:117] "RemoveContainer" containerID="0c2775766e7822dcc8fc40833d9e3a86b971caee2c80f169a3c9941c2462eb7b" Jan 25 00:36:05 crc kubenswrapper[4947]: I0125 00:36:05.090252 4947 scope.go:117] "RemoveContainer" containerID="bf6ac4b92c89fa1eb2bbe3998935500a69af02ce449b2f439c10729177848fce" Jan 25 00:36:05 crc kubenswrapper[4947]: I0125 00:36:05.090915 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:36:05 crc kubenswrapper[4947]: E0125 00:36:05.091297 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:36:06 crc kubenswrapper[4947]: E0125 00:36:06.971750 4947 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/tripleowallabycentos9/openstack-qdrouterd:current-tripleo" Jan 25 00:36:06 crc kubenswrapper[4947]: E0125 00:36:06.972237 4947 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:qdr,Image:quay.io/tripleowallabycentos9/openstack-qdrouterd:current-tripleo,Command:[/usr/sbin/qdrouterd -c /etc/qpid-dispatch/qdrouterd.conf],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:amqp,HostPort:0,ContainerPort:5672,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-interconnect-selfsigned-cert,ReadOnly:false,MountPath:/etc/pki/tls/certs/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:qdr-test-config,ReadOnly:false,MountPath:/etc/qpid-dispatch/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6rhdt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod qdr-test_service-telemetry(1e239640-20ad-42e0-8db4-0ada55b1274c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 25 00:36:06 crc kubenswrapper[4947]: E0125 00:36:06.973434 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"qdr\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/qdr-test" podUID="1e239640-20ad-42e0-8db4-0ada55b1274c" Jan 25 00:36:07 crc kubenswrapper[4947]: I0125 00:36:07.090293 4947 scope.go:117] "RemoveContainer" containerID="35e16116d28f7d36faafc8c370c2b58fc605be75020a2109beeae992364e6c13" Jan 25 00:36:07 crc kubenswrapper[4947]: I0125 00:36:07.090440 4947 scope.go:117] "RemoveContainer" containerID="8a2e587071b057abdd25da40e9803f24478fc5cb9081aa1437831aad84949e07" Jan 25 00:36:07 crc kubenswrapper[4947]: I0125 00:36:07.644527 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n" event={"ID":"d0837a20-7313-4da0-9df6-1ce849d1f029","Type":"ContainerStarted","Data":"a9b1927d41a5bc517fbc0649f1a336e77ed866ebec74337a5668d2eb7759755d"} Jan 25 00:36:07 crc kubenswrapper[4947]: I0125 00:36:07.650283 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm" 
event={"ID":"5ef6c1dd-abb0-4c2f-8aa5-13614c09e445","Type":"ContainerStarted","Data":"5089b3884e5aabf6594b10df4b26d4986df2459c4fae97c1f632658a206cf0d0"} Jan 25 00:36:07 crc kubenswrapper[4947]: I0125 00:36:07.655022 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf" event={"ID":"0d98fa0e-0a1b-4139-b32a-dbc771dc0939","Type":"ContainerStarted","Data":"d47a05edfe52f289259cb07f361d7d752bdb8172bb99efc2758000290702c4a9"} Jan 25 00:36:07 crc kubenswrapper[4947]: E0125 00:36:07.656371 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"qdr\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/tripleowallabycentos9/openstack-qdrouterd:current-tripleo\\\"\"" pod="service-telemetry/qdr-test" podUID="1e239640-20ad-42e0-8db4-0ada55b1274c" Jan 25 00:36:09 crc kubenswrapper[4947]: I0125 00:36:09.089398 4947 scope.go:117] "RemoveContainer" containerID="337e296ba4309a43005154c28f2f3bc7aaf976138af72dd9cd336f268755d3b5" Jan 25 00:36:10 crc kubenswrapper[4947]: I0125 00:36:10.090328 4947 scope.go:117] "RemoveContainer" containerID="aeea95242d810e307e8bdf31d6bb90ca0e574420d839fb3d8a0912352dc00ac9" Jan 25 00:36:10 crc kubenswrapper[4947]: I0125 00:36:10.689100 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-9c5498458-pjx7f" event={"ID":"9b3c0215-a9e0-45e1-a844-c93fd70138c9","Type":"ContainerStarted","Data":"e6faf1d91fbec62cbd236f7b59d347475195c4098f4f949b15e10ea150c42b3b"} Jan 25 00:36:10 crc kubenswrapper[4947]: I0125 00:36:10.693515 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-899c7f46d-5982c" event={"ID":"b022a945-2af3-4275-bc4b-5db0790be691","Type":"ContainerStarted","Data":"f7a5668c5ce3d4fd9817304a4fde7c2cf72e81822d6545d9eac7237029cc1f88"} Jan 25 00:36:19 crc kubenswrapper[4947]: I0125 00:36:19.092662 4947 
scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:36:19 crc kubenswrapper[4947]: E0125 00:36:19.093345 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:36:20 crc kubenswrapper[4947]: I0125 00:36:20.771898 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"1e239640-20ad-42e0-8db4-0ada55b1274c","Type":"ContainerStarted","Data":"38fc9634cbadc9e242f4d8e8ac0e02dc25fd742c8f1b2a2564426055c35b2c34"} Jan 25 00:36:20 crc kubenswrapper[4947]: I0125 00:36:20.788239 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=1.6112226060000001 podStartE2EDuration="26.788214608s" podCreationTimestamp="2026-01-25 00:35:54 +0000 UTC" firstStartedPulling="2026-01-25 00:35:55.144117549 +0000 UTC m=+1594.377107989" lastFinishedPulling="2026-01-25 00:36:20.321109551 +0000 UTC m=+1619.554099991" observedRunningTime="2026-01-25 00:36:20.783410023 +0000 UTC m=+1620.016400463" watchObservedRunningTime="2026-01-25 00:36:20.788214608 +0000 UTC m=+1620.021205048" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.175756 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-7ffht"] Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.176772 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.179669 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.179978 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.180445 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.181540 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.181781 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.189554 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.197332 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-7ffht"] Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.322615 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n95pk\" (UniqueName: \"kubernetes.io/projected/b6c83c5f-2bee-41a9-8433-391c8e71812b-kube-api-access-n95pk\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.322690 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.322774 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-config\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.322820 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-sensubility-config\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.322855 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.322901 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-healthcheck-log\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc 
kubenswrapper[4947]: I0125 00:36:21.323012 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.424612 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-sensubility-config\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.424755 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.424885 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-healthcheck-log\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.424972 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " 
pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.425052 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n95pk\" (UniqueName: \"kubernetes.io/projected/b6c83c5f-2bee-41a9-8433-391c8e71812b-kube-api-access-n95pk\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.425113 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.425188 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-config\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.426383 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-sensubility-config\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.426604 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-config\") pod \"stf-smoketest-smoke1-7ffht\" (UID: 
\"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.427677 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-publisher\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.427888 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.428787 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-healthcheck-log\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.428968 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-7ffht\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.453902 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n95pk\" (UniqueName: \"kubernetes.io/projected/b6c83c5f-2bee-41a9-8433-391c8e71812b-kube-api-access-n95pk\") pod \"stf-smoketest-smoke1-7ffht\" 
(UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.496866 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.615159 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.620919 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.636431 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.733205 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfnhv\" (UniqueName: \"kubernetes.io/projected/efbf27b1-75f2-42e1-87b0-d9a6553993eb-kube-api-access-pfnhv\") pod \"curl\" (UID: \"efbf27b1-75f2-42e1-87b0-d9a6553993eb\") " pod="service-telemetry/curl" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.835505 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfnhv\" (UniqueName: \"kubernetes.io/projected/efbf27b1-75f2-42e1-87b0-d9a6553993eb-kube-api-access-pfnhv\") pod \"curl\" (UID: \"efbf27b1-75f2-42e1-87b0-d9a6553993eb\") " pod="service-telemetry/curl" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.857817 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfnhv\" (UniqueName: \"kubernetes.io/projected/efbf27b1-75f2-42e1-87b0-d9a6553993eb-kube-api-access-pfnhv\") pod \"curl\" (UID: \"efbf27b1-75f2-42e1-87b0-d9a6553993eb\") " pod="service-telemetry/curl" Jan 25 00:36:21 crc kubenswrapper[4947]: I0125 00:36:21.968662 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 25 00:36:22 crc kubenswrapper[4947]: I0125 00:36:22.015571 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-7ffht"] Jan 25 00:36:22 crc kubenswrapper[4947]: W0125 00:36:22.022932 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6c83c5f_2bee_41a9_8433_391c8e71812b.slice/crio-b22c8ef5e4718a59fd30f99e333f81a3f0e6be9c29962e6b498192b4a437cd92 WatchSource:0}: Error finding container b22c8ef5e4718a59fd30f99e333f81a3f0e6be9c29962e6b498192b4a437cd92: Status 404 returned error can't find the container with id b22c8ef5e4718a59fd30f99e333f81a3f0e6be9c29962e6b498192b4a437cd92 Jan 25 00:36:22 crc kubenswrapper[4947]: I0125 00:36:22.290789 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 25 00:36:22 crc kubenswrapper[4947]: I0125 00:36:22.792328 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7ffht" event={"ID":"b6c83c5f-2bee-41a9-8433-391c8e71812b","Type":"ContainerStarted","Data":"b22c8ef5e4718a59fd30f99e333f81a3f0e6be9c29962e6b498192b4a437cd92"} Jan 25 00:36:22 crc kubenswrapper[4947]: I0125 00:36:22.793496 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"efbf27b1-75f2-42e1-87b0-d9a6553993eb","Type":"ContainerStarted","Data":"9aeefd58e22e91b203be33a879dd5b664a89c647e26bffc8bcc7b2be35a32d8c"} Jan 25 00:36:24 crc kubenswrapper[4947]: I0125 00:36:24.822636 4947 generic.go:334] "Generic (PLEG): container finished" podID="efbf27b1-75f2-42e1-87b0-d9a6553993eb" containerID="326ef725fcb1dd594f7bbc583c22d8fc4e7e9add78f50e9f61ae71c3ff520d34" exitCode=0 Jan 25 00:36:24 crc kubenswrapper[4947]: I0125 00:36:24.822992 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" 
event={"ID":"efbf27b1-75f2-42e1-87b0-d9a6553993eb","Type":"ContainerDied","Data":"326ef725fcb1dd594f7bbc583c22d8fc4e7e9add78f50e9f61ae71c3ff520d34"} Jan 25 00:36:31 crc kubenswrapper[4947]: I0125 00:36:31.101742 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:36:31 crc kubenswrapper[4947]: E0125 00:36:31.103061 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:36:32 crc kubenswrapper[4947]: I0125 00:36:32.264537 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 25 00:36:32 crc kubenswrapper[4947]: I0125 00:36:32.467058 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfnhv\" (UniqueName: \"kubernetes.io/projected/efbf27b1-75f2-42e1-87b0-d9a6553993eb-kube-api-access-pfnhv\") pod \"efbf27b1-75f2-42e1-87b0-d9a6553993eb\" (UID: \"efbf27b1-75f2-42e1-87b0-d9a6553993eb\") " Jan 25 00:36:32 crc kubenswrapper[4947]: I0125 00:36:32.470531 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_efbf27b1-75f2-42e1-87b0-d9a6553993eb/curl/0.log" Jan 25 00:36:32 crc kubenswrapper[4947]: I0125 00:36:32.471313 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efbf27b1-75f2-42e1-87b0-d9a6553993eb-kube-api-access-pfnhv" (OuterVolumeSpecName: "kube-api-access-pfnhv") pod "efbf27b1-75f2-42e1-87b0-d9a6553993eb" (UID: "efbf27b1-75f2-42e1-87b0-d9a6553993eb"). InnerVolumeSpecName "kube-api-access-pfnhv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:36:32 crc kubenswrapper[4947]: I0125 00:36:32.568602 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfnhv\" (UniqueName: \"kubernetes.io/projected/efbf27b1-75f2-42e1-87b0-d9a6553993eb-kube-api-access-pfnhv\") on node \"crc\" DevicePath \"\"" Jan 25 00:36:32 crc kubenswrapper[4947]: I0125 00:36:32.831884 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-4gkbj_b6cfc9d0-598c-4149-8a38-ec02ced8d2b8/prometheus-webhook-snmp/0.log" Jan 25 00:36:32 crc kubenswrapper[4947]: I0125 00:36:32.882784 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7ffht" event={"ID":"b6c83c5f-2bee-41a9-8433-391c8e71812b","Type":"ContainerStarted","Data":"db6c1923ad105525adf2b51f7b5d0f146fb16811af782b766667e443c43fec08"} Jan 25 00:36:32 crc kubenswrapper[4947]: I0125 00:36:32.883918 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"efbf27b1-75f2-42e1-87b0-d9a6553993eb","Type":"ContainerDied","Data":"9aeefd58e22e91b203be33a879dd5b664a89c647e26bffc8bcc7b2be35a32d8c"} Jan 25 00:36:32 crc kubenswrapper[4947]: I0125 00:36:32.883948 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9aeefd58e22e91b203be33a879dd5b664a89c647e26bffc8bcc7b2be35a32d8c" Jan 25 00:36:32 crc kubenswrapper[4947]: I0125 00:36:32.884001 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 25 00:36:37 crc kubenswrapper[4947]: I0125 00:36:37.928144 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7ffht" event={"ID":"b6c83c5f-2bee-41a9-8433-391c8e71812b","Type":"ContainerStarted","Data":"c440b36ccd49f738e0b6f93f7e1c26d5c11687d6b9bbecd5955862939a30bcbb"} Jan 25 00:36:37 crc kubenswrapper[4947]: I0125 00:36:37.949372 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-7ffht" podStartSLOduration=1.2883502629999999 podStartE2EDuration="16.949354231s" podCreationTimestamp="2026-01-25 00:36:21 +0000 UTC" firstStartedPulling="2026-01-25 00:36:22.025095318 +0000 UTC m=+1621.258085758" lastFinishedPulling="2026-01-25 00:36:37.686099276 +0000 UTC m=+1636.919089726" observedRunningTime="2026-01-25 00:36:37.943663732 +0000 UTC m=+1637.176654172" watchObservedRunningTime="2026-01-25 00:36:37.949354231 +0000 UTC m=+1637.182344671" Jan 25 00:36:45 crc kubenswrapper[4947]: I0125 00:36:45.089630 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:36:45 crc kubenswrapper[4947]: E0125 00:36:45.090508 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:36:58 crc kubenswrapper[4947]: I0125 00:36:58.089934 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:36:58 crc kubenswrapper[4947]: E0125 00:36:58.091002 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:37:03 crc kubenswrapper[4947]: I0125 00:37:03.046404 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-4gkbj_b6cfc9d0-598c-4149-8a38-ec02ced8d2b8/prometheus-webhook-snmp/0.log" Jan 25 00:37:07 crc kubenswrapper[4947]: I0125 00:37:07.216091 4947 generic.go:334] "Generic (PLEG): container finished" podID="b6c83c5f-2bee-41a9-8433-391c8e71812b" containerID="db6c1923ad105525adf2b51f7b5d0f146fb16811af782b766667e443c43fec08" exitCode=1 Jan 25 00:37:07 crc kubenswrapper[4947]: I0125 00:37:07.216190 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7ffht" event={"ID":"b6c83c5f-2bee-41a9-8433-391c8e71812b","Type":"ContainerDied","Data":"db6c1923ad105525adf2b51f7b5d0f146fb16811af782b766667e443c43fec08"} Jan 25 00:37:07 crc kubenswrapper[4947]: I0125 00:37:07.217412 4947 scope.go:117] "RemoveContainer" containerID="db6c1923ad105525adf2b51f7b5d0f146fb16811af782b766667e443c43fec08" Jan 25 00:37:09 crc kubenswrapper[4947]: I0125 00:37:09.090454 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:37:09 crc kubenswrapper[4947]: E0125 00:37:09.090750 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" 
podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:37:10 crc kubenswrapper[4947]: I0125 00:37:10.239053 4947 generic.go:334] "Generic (PLEG): container finished" podID="b6c83c5f-2bee-41a9-8433-391c8e71812b" containerID="c440b36ccd49f738e0b6f93f7e1c26d5c11687d6b9bbecd5955862939a30bcbb" exitCode=0 Jan 25 00:37:10 crc kubenswrapper[4947]: I0125 00:37:10.239118 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7ffht" event={"ID":"b6c83c5f-2bee-41a9-8433-391c8e71812b","Type":"ContainerDied","Data":"c440b36ccd49f738e0b6f93f7e1c26d5c11687d6b9bbecd5955862939a30bcbb"} Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.588666 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.697608 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-healthcheck-log\") pod \"b6c83c5f-2bee-41a9-8433-391c8e71812b\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.697984 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-entrypoint-script\") pod \"b6c83c5f-2bee-41a9-8433-391c8e71812b\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.698022 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-publisher\") pod \"b6c83c5f-2bee-41a9-8433-391c8e71812b\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.698083 4947 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-sensubility-config\") pod \"b6c83c5f-2bee-41a9-8433-391c8e71812b\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.698151 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-config\") pod \"b6c83c5f-2bee-41a9-8433-391c8e71812b\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.698181 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-entrypoint-script\") pod \"b6c83c5f-2bee-41a9-8433-391c8e71812b\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.698239 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n95pk\" (UniqueName: \"kubernetes.io/projected/b6c83c5f-2bee-41a9-8433-391c8e71812b-kube-api-access-n95pk\") pod \"b6c83c5f-2bee-41a9-8433-391c8e71812b\" (UID: \"b6c83c5f-2bee-41a9-8433-391c8e71812b\") " Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.704750 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6c83c5f-2bee-41a9-8433-391c8e71812b-kube-api-access-n95pk" (OuterVolumeSpecName: "kube-api-access-n95pk") pod "b6c83c5f-2bee-41a9-8433-391c8e71812b" (UID: "b6c83c5f-2bee-41a9-8433-391c8e71812b"). InnerVolumeSpecName "kube-api-access-n95pk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.715539 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "b6c83c5f-2bee-41a9-8433-391c8e71812b" (UID: "b6c83c5f-2bee-41a9-8433-391c8e71812b"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.719800 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "b6c83c5f-2bee-41a9-8433-391c8e71812b" (UID: "b6c83c5f-2bee-41a9-8433-391c8e71812b"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.720117 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "b6c83c5f-2bee-41a9-8433-391c8e71812b" (UID: "b6c83c5f-2bee-41a9-8433-391c8e71812b"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.721070 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "b6c83c5f-2bee-41a9-8433-391c8e71812b" (UID: "b6c83c5f-2bee-41a9-8433-391c8e71812b"). InnerVolumeSpecName "collectd-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.729408 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "b6c83c5f-2bee-41a9-8433-391c8e71812b" (UID: "b6c83c5f-2bee-41a9-8433-391c8e71812b"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.736010 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "b6c83c5f-2bee-41a9-8433-391c8e71812b" (UID: "b6c83c5f-2bee-41a9-8433-391c8e71812b"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.808886 4947 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.808930 4947 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.808943 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n95pk\" (UniqueName: \"kubernetes.io/projected/b6c83c5f-2bee-41a9-8433-391c8e71812b-kube-api-access-n95pk\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.808951 4947 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: 
\"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-healthcheck-log\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.808984 4947 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.808992 4947 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:11 crc kubenswrapper[4947]: I0125 00:37:11.809001 4947 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/b6c83c5f-2bee-41a9-8433-391c8e71812b-sensubility-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:12 crc kubenswrapper[4947]: I0125 00:37:12.263459 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7ffht" event={"ID":"b6c83c5f-2bee-41a9-8433-391c8e71812b","Type":"ContainerDied","Data":"b22c8ef5e4718a59fd30f99e333f81a3f0e6be9c29962e6b498192b4a437cd92"} Jan 25 00:37:12 crc kubenswrapper[4947]: I0125 00:37:12.263517 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b22c8ef5e4718a59fd30f99e333f81a3f0e6be9c29962e6b498192b4a437cd92" Jan 25 00:37:12 crc kubenswrapper[4947]: I0125 00:37:12.263609 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-7ffht" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.045582 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-hsw2k"] Jan 25 00:37:19 crc kubenswrapper[4947]: E0125 00:37:19.048713 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efbf27b1-75f2-42e1-87b0-d9a6553993eb" containerName="curl" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.048903 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="efbf27b1-75f2-42e1-87b0-d9a6553993eb" containerName="curl" Jan 25 00:37:19 crc kubenswrapper[4947]: E0125 00:37:19.049049 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c83c5f-2bee-41a9-8433-391c8e71812b" containerName="smoketest-ceilometer" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.049254 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c83c5f-2bee-41a9-8433-391c8e71812b" containerName="smoketest-ceilometer" Jan 25 00:37:19 crc kubenswrapper[4947]: E0125 00:37:19.049457 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6c83c5f-2bee-41a9-8433-391c8e71812b" containerName="smoketest-collectd" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.049631 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6c83c5f-2bee-41a9-8433-391c8e71812b" containerName="smoketest-collectd" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.049987 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c83c5f-2bee-41a9-8433-391c8e71812b" containerName="smoketest-ceilometer" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.050185 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6c83c5f-2bee-41a9-8433-391c8e71812b" containerName="smoketest-collectd" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.050327 4947 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="efbf27b1-75f2-42e1-87b0-d9a6553993eb" containerName="curl" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.051688 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.058668 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.058668 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.058764 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.063328 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-hsw2k"] Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.063409 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.063450 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.063450 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.124056 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-healthcheck-log\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 
crc kubenswrapper[4947]: I0125 00:37:19.124151 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-config\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.124419 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.124523 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-publisher\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.124561 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.124849 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-sensubility-config\") pod 
\"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.124975 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfprm\" (UniqueName: \"kubernetes.io/projected/da929df5-1e6a-47d5-85b4-6ec2b408203e-kube-api-access-vfprm\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.227363 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-sensubility-config\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.227504 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfprm\" (UniqueName: \"kubernetes.io/projected/da929df5-1e6a-47d5-85b4-6ec2b408203e-kube-api-access-vfprm\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.227601 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-healthcheck-log\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.227670 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: 
\"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-config\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.227833 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.227901 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-publisher\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.227966 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.229446 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-sensubility-config\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.230400 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.231090 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-healthcheck-log\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.231317 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-publisher\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.234412 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.249412 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-config\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.261290 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vfprm\" (UniqueName: \"kubernetes.io/projected/da929df5-1e6a-47d5-85b4-6ec2b408203e-kube-api-access-vfprm\") pod \"stf-smoketest-smoke1-hsw2k\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.396584 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:19 crc kubenswrapper[4947]: I0125 00:37:19.623054 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-hsw2k"] Jan 25 00:37:20 crc kubenswrapper[4947]: I0125 00:37:20.319391 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hsw2k" event={"ID":"da929df5-1e6a-47d5-85b4-6ec2b408203e","Type":"ContainerStarted","Data":"0560ec9982811d5f213c37bad06bd5e70a8476cdac5f4d9ca483d0737f01458b"} Jan 25 00:37:20 crc kubenswrapper[4947]: I0125 00:37:20.319744 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hsw2k" event={"ID":"da929df5-1e6a-47d5-85b4-6ec2b408203e","Type":"ContainerStarted","Data":"06d6e388fdecd0356703237e6e0279e4c2a5c45a270cbc144c451b647da6f5c4"} Jan 25 00:37:20 crc kubenswrapper[4947]: I0125 00:37:20.319760 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hsw2k" event={"ID":"da929df5-1e6a-47d5-85b4-6ec2b408203e","Type":"ContainerStarted","Data":"110cf8b2b20c6c29231bf9ef83d8cfeceac4b9679d2965b70318ddee151b410e"} Jan 25 00:37:21 crc kubenswrapper[4947]: I0125 00:37:21.097999 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:37:21 crc kubenswrapper[4947]: E0125 00:37:21.098744 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting 
failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:37:32 crc kubenswrapper[4947]: I0125 00:37:32.217300 4947 scope.go:117] "RemoveContainer" containerID="8bd3c19ebf87dfa9492f0b6c526fd93072c03b5bf0b404a53c130aa373ce7a49" Jan 25 00:37:32 crc kubenswrapper[4947]: I0125 00:37:32.261235 4947 scope.go:117] "RemoveContainer" containerID="3b9968f7292e05181c1395dde57498d955080e1dcb49d9484e57992f06fed9b1" Jan 25 00:37:33 crc kubenswrapper[4947]: I0125 00:37:33.090700 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:37:33 crc kubenswrapper[4947]: E0125 00:37:33.091157 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:37:46 crc kubenswrapper[4947]: I0125 00:37:46.090032 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:37:46 crc kubenswrapper[4947]: E0125 00:37:46.091203 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:37:52 crc 
kubenswrapper[4947]: I0125 00:37:52.692891 4947 generic.go:334] "Generic (PLEG): container finished" podID="da929df5-1e6a-47d5-85b4-6ec2b408203e" containerID="0560ec9982811d5f213c37bad06bd5e70a8476cdac5f4d9ca483d0737f01458b" exitCode=0 Jan 25 00:37:52 crc kubenswrapper[4947]: I0125 00:37:52.693013 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hsw2k" event={"ID":"da929df5-1e6a-47d5-85b4-6ec2b408203e","Type":"ContainerDied","Data":"0560ec9982811d5f213c37bad06bd5e70a8476cdac5f4d9ca483d0737f01458b"} Jan 25 00:37:52 crc kubenswrapper[4947]: I0125 00:37:52.694773 4947 scope.go:117] "RemoveContainer" containerID="0560ec9982811d5f213c37bad06bd5e70a8476cdac5f4d9ca483d0737f01458b" Jan 25 00:37:53 crc kubenswrapper[4947]: I0125 00:37:53.705064 4947 generic.go:334] "Generic (PLEG): container finished" podID="da929df5-1e6a-47d5-85b4-6ec2b408203e" containerID="06d6e388fdecd0356703237e6e0279e4c2a5c45a270cbc144c451b647da6f5c4" exitCode=0 Jan 25 00:37:53 crc kubenswrapper[4947]: I0125 00:37:53.705147 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hsw2k" event={"ID":"da929df5-1e6a-47d5-85b4-6ec2b408203e","Type":"ContainerDied","Data":"06d6e388fdecd0356703237e6e0279e4c2a5c45a270cbc144c451b647da6f5c4"} Jan 25 00:37:54 crc kubenswrapper[4947]: I0125 00:37:54.977741 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.061341 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfprm\" (UniqueName: \"kubernetes.io/projected/da929df5-1e6a-47d5-85b4-6ec2b408203e-kube-api-access-vfprm\") pod \"da929df5-1e6a-47d5-85b4-6ec2b408203e\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.061436 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-entrypoint-script\") pod \"da929df5-1e6a-47d5-85b4-6ec2b408203e\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.061474 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-healthcheck-log\") pod \"da929df5-1e6a-47d5-85b4-6ec2b408203e\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.061563 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-publisher\") pod \"da929df5-1e6a-47d5-85b4-6ec2b408203e\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.061583 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-sensubility-config\") pod \"da929df5-1e6a-47d5-85b4-6ec2b408203e\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.061620 4947 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-config\") pod \"da929df5-1e6a-47d5-85b4-6ec2b408203e\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.061644 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-entrypoint-script\") pod \"da929df5-1e6a-47d5-85b4-6ec2b408203e\" (UID: \"da929df5-1e6a-47d5-85b4-6ec2b408203e\") " Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.070074 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da929df5-1e6a-47d5-85b4-6ec2b408203e-kube-api-access-vfprm" (OuterVolumeSpecName: "kube-api-access-vfprm") pod "da929df5-1e6a-47d5-85b4-6ec2b408203e" (UID: "da929df5-1e6a-47d5-85b4-6ec2b408203e"). InnerVolumeSpecName "kube-api-access-vfprm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.079221 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "da929df5-1e6a-47d5-85b4-6ec2b408203e" (UID: "da929df5-1e6a-47d5-85b4-6ec2b408203e"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.079376 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "da929df5-1e6a-47d5-85b4-6ec2b408203e" (UID: "da929df5-1e6a-47d5-85b4-6ec2b408203e"). InnerVolumeSpecName "healthcheck-log". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.081754 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "da929df5-1e6a-47d5-85b4-6ec2b408203e" (UID: "da929df5-1e6a-47d5-85b4-6ec2b408203e"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.083983 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "da929df5-1e6a-47d5-85b4-6ec2b408203e" (UID: "da929df5-1e6a-47d5-85b4-6ec2b408203e"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.087717 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "da929df5-1e6a-47d5-85b4-6ec2b408203e" (UID: "da929df5-1e6a-47d5-85b4-6ec2b408203e"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.091417 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "da929df5-1e6a-47d5-85b4-6ec2b408203e" (UID: "da929df5-1e6a-47d5-85b4-6ec2b408203e"). InnerVolumeSpecName "ceilometer-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.164874 4947 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.164909 4947 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-healthcheck-log\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.165005 4947 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.165066 4947 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-sensubility-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.165094 4947 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.165117 4947 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/da929df5-1e6a-47d5-85b4-6ec2b408203e-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.165182 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfprm\" (UniqueName: \"kubernetes.io/projected/da929df5-1e6a-47d5-85b4-6ec2b408203e-kube-api-access-vfprm\") on node 
\"crc\" DevicePath \"\"" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.729627 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hsw2k" event={"ID":"da929df5-1e6a-47d5-85b4-6ec2b408203e","Type":"ContainerDied","Data":"110cf8b2b20c6c29231bf9ef83d8cfeceac4b9679d2965b70318ddee151b410e"} Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.729744 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hsw2k" Jan 25 00:37:55 crc kubenswrapper[4947]: I0125 00:37:55.729722 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="110cf8b2b20c6c29231bf9ef83d8cfeceac4b9679d2965b70318ddee151b410e" Jan 25 00:37:57 crc kubenswrapper[4947]: I0125 00:37:57.298807 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-7ffht_b6c83c5f-2bee-41a9-8433-391c8e71812b/smoketest-collectd/0.log" Jan 25 00:37:57 crc kubenswrapper[4947]: I0125 00:37:57.663527 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-7ffht_b6c83c5f-2bee-41a9-8433-391c8e71812b/smoketest-ceilometer/0.log" Jan 25 00:37:58 crc kubenswrapper[4947]: I0125 00:37:58.007823 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-qb5z8_53107577-1f0a-4c8d-b5e5-81e4d415f3a1/default-interconnect/0.log" Jan 25 00:37:58 crc kubenswrapper[4947]: I0125 00:37:58.382009 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf_0d98fa0e-0a1b-4139-b32a-dbc771dc0939/bridge/2.log" Jan 25 00:37:58 crc kubenswrapper[4947]: I0125 00:37:58.668847 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-pffqf_0d98fa0e-0a1b-4139-b32a-dbc771dc0939/sg-core/0.log" Jan 25 00:37:58 
crc kubenswrapper[4947]: I0125 00:37:58.996594 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-9c5498458-pjx7f_9b3c0215-a9e0-45e1-a844-c93fd70138c9/bridge/2.log" Jan 25 00:37:59 crc kubenswrapper[4947]: I0125 00:37:59.387812 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-9c5498458-pjx7f_9b3c0215-a9e0-45e1-a844-c93fd70138c9/sg-core/0.log" Jan 25 00:37:59 crc kubenswrapper[4947]: I0125 00:37:59.788050 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n_d0837a20-7313-4da0-9df6-1ce849d1f029/bridge/2.log" Jan 25 00:38:00 crc kubenswrapper[4947]: I0125 00:38:00.121553 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-f8k2n_d0837a20-7313-4da0-9df6-1ce849d1f029/sg-core/0.log" Jan 25 00:38:00 crc kubenswrapper[4947]: I0125 00:38:00.458778 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-899c7f46d-5982c_b022a945-2af3-4275-bc4b-5db0790be691/bridge/2.log" Jan 25 00:38:00 crc kubenswrapper[4947]: I0125 00:38:00.790456 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-899c7f46d-5982c_b022a945-2af3-4275-bc4b-5db0790be691/sg-core/0.log" Jan 25 00:38:01 crc kubenswrapper[4947]: I0125 00:38:01.095294 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:38:01 crc kubenswrapper[4947]: E0125 00:38:01.095624 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:38:01 crc kubenswrapper[4947]: I0125 00:38:01.139489 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm_5ef6c1dd-abb0-4c2f-8aa5-13614c09e445/bridge/2.log" Jan 25 00:38:01 crc kubenswrapper[4947]: I0125 00:38:01.543627 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-cmnlm_5ef6c1dd-abb0-4c2f-8aa5-13614c09e445/sg-core/0.log" Jan 25 00:38:05 crc kubenswrapper[4947]: I0125 00:38:05.191563 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-859d6d5949-k28x7_043c18a1-e602-4917-b73d-5331da5ee62f/operator/0.log" Jan 25 00:38:05 crc kubenswrapper[4947]: I0125 00:38:05.589167 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_3d82b42a-6236-43af-8190-d28e96b2b933/prometheus/0.log" Jan 25 00:38:05 crc kubenswrapper[4947]: I0125 00:38:05.997313 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_ac6cbdf5-f2a1-4e0a-90cb-2d97e1caa9a6/elasticsearch/0.log" Jan 25 00:38:06 crc kubenswrapper[4947]: I0125 00:38:06.377104 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-4gkbj_b6cfc9d0-598c-4149-8a38-ec02ced8d2b8/prometheus-webhook-snmp/0.log" Jan 25 00:38:06 crc kubenswrapper[4947]: I0125 00:38:06.702620 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_4888dfdb-3780-4d4b-ad3a-4c1238a72464/alertmanager/0.log" Jan 25 00:38:12 crc kubenswrapper[4947]: I0125 00:38:12.090639 4947 scope.go:117] "RemoveContainer" 
containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:38:12 crc kubenswrapper[4947]: E0125 00:38:12.091490 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:38:22 crc kubenswrapper[4947]: I0125 00:38:22.594173 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-7b5f5cc44d-grff4_ccea2ce4-d212-4599-a152-5a2d53366128/operator/0.log" Jan 25 00:38:26 crc kubenswrapper[4947]: I0125 00:38:26.089423 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:38:26 crc kubenswrapper[4947]: E0125 00:38:26.089816 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:38:26 crc kubenswrapper[4947]: I0125 00:38:26.205139 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-859d6d5949-k28x7_043c18a1-e602-4917-b73d-5331da5ee62f/operator/0.log" Jan 25 00:38:26 crc kubenswrapper[4947]: I0125 00:38:26.528261 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_1e239640-20ad-42e0-8db4-0ada55b1274c/qdr/0.log" Jan 25 00:38:37 crc kubenswrapper[4947]: I0125 00:38:37.089812 4947 
scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:38:37 crc kubenswrapper[4947]: E0125 00:38:37.091021 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:38:50 crc kubenswrapper[4947]: I0125 00:38:50.089909 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:38:50 crc kubenswrapper[4947]: E0125 00:38:50.091043 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.019307 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8b4ht/must-gather-vgn5q"] Jan 25 00:38:53 crc kubenswrapper[4947]: E0125 00:38:53.019819 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da929df5-1e6a-47d5-85b4-6ec2b408203e" containerName="smoketest-ceilometer" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.019833 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="da929df5-1e6a-47d5-85b4-6ec2b408203e" containerName="smoketest-ceilometer" Jan 25 00:38:53 crc kubenswrapper[4947]: E0125 00:38:53.019841 4947 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="da929df5-1e6a-47d5-85b4-6ec2b408203e" containerName="smoketest-collectd" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.019847 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="da929df5-1e6a-47d5-85b4-6ec2b408203e" containerName="smoketest-collectd" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.019966 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="da929df5-1e6a-47d5-85b4-6ec2b408203e" containerName="smoketest-ceilometer" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.019979 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="da929df5-1e6a-47d5-85b4-6ec2b408203e" containerName="smoketest-collectd" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.020700 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8b4ht/must-gather-vgn5q" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.022631 4947 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-8b4ht"/"default-dockercfg-mhw8v" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.022686 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8b4ht"/"kube-root-ca.crt" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.023227 4947 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8b4ht"/"openshift-service-ca.crt" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.042758 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8b4ht/must-gather-vgn5q"] Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.119858 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwpmj\" (UniqueName: \"kubernetes.io/projected/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-kube-api-access-fwpmj\") pod \"must-gather-vgn5q\" (UID: \"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8\") " 
pod="openshift-must-gather-8b4ht/must-gather-vgn5q" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.119925 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-must-gather-output\") pod \"must-gather-vgn5q\" (UID: \"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8\") " pod="openshift-must-gather-8b4ht/must-gather-vgn5q" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.221436 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwpmj\" (UniqueName: \"kubernetes.io/projected/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-kube-api-access-fwpmj\") pod \"must-gather-vgn5q\" (UID: \"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8\") " pod="openshift-must-gather-8b4ht/must-gather-vgn5q" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.221491 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-must-gather-output\") pod \"must-gather-vgn5q\" (UID: \"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8\") " pod="openshift-must-gather-8b4ht/must-gather-vgn5q" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.221995 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-must-gather-output\") pod \"must-gather-vgn5q\" (UID: \"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8\") " pod="openshift-must-gather-8b4ht/must-gather-vgn5q" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.242669 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwpmj\" (UniqueName: \"kubernetes.io/projected/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-kube-api-access-fwpmj\") pod \"must-gather-vgn5q\" (UID: \"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8\") " 
pod="openshift-must-gather-8b4ht/must-gather-vgn5q" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.342831 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8b4ht/must-gather-vgn5q" Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.579907 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8b4ht/must-gather-vgn5q"] Jan 25 00:38:53 crc kubenswrapper[4947]: I0125 00:38:53.592612 4947 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 25 00:38:54 crc kubenswrapper[4947]: I0125 00:38:54.268532 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8b4ht/must-gather-vgn5q" event={"ID":"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8","Type":"ContainerStarted","Data":"6ac178507348be2e0cd69ce208e03c0c37c472f9596314eaf57fe56b2f76bc78"} Jan 25 00:39:01 crc kubenswrapper[4947]: I0125 00:39:01.332970 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8b4ht/must-gather-vgn5q" event={"ID":"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8","Type":"ContainerStarted","Data":"447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0"} Jan 25 00:39:01 crc kubenswrapper[4947]: I0125 00:39:01.333594 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8b4ht/must-gather-vgn5q" event={"ID":"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8","Type":"ContainerStarted","Data":"b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2"} Jan 25 00:39:02 crc kubenswrapper[4947]: I0125 00:39:02.090046 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:39:02 crc kubenswrapper[4947]: E0125 00:39:02.090307 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:39:13 crc kubenswrapper[4947]: I0125 00:39:13.089543 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:39:13 crc kubenswrapper[4947]: E0125 00:39:13.090522 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:39:24 crc kubenswrapper[4947]: I0125 00:39:24.090096 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:39:24 crc kubenswrapper[4947]: E0125 00:39:24.092853 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:39:39 crc kubenswrapper[4947]: I0125 00:39:39.090189 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:39:39 crc kubenswrapper[4947]: E0125 00:39:39.091034 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:39:48 crc kubenswrapper[4947]: I0125 00:39:48.442170 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-vwtw5_f56c1338-08c8-47de-b24a-3aaf85e315f8/control-plane-machine-set-operator/0.log" Jan 25 00:39:48 crc kubenswrapper[4947]: I0125 00:39:48.557822 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-h6jgn_b8f2f610-05dc-49ea-882e-634d283b3caa/kube-rbac-proxy/0.log" Jan 25 00:39:48 crc kubenswrapper[4947]: I0125 00:39:48.634941 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-h6jgn_b8f2f610-05dc-49ea-882e-634d283b3caa/machine-api-operator/0.log" Jan 25 00:39:53 crc kubenswrapper[4947]: I0125 00:39:53.089957 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:39:53 crc kubenswrapper[4947]: E0125 00:39:53.090865 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:40:02 crc kubenswrapper[4947]: I0125 00:40:02.077190 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-tgcft_6beb1442-5e99-4164-8077-50d6eb5dbd44/cert-manager-controller/0.log" Jan 25 00:40:02 crc kubenswrapper[4947]: I0125 
00:40:02.193681 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-cqxvz_c215f860-08a3-4dbd-b7f2-426286319aa8/cert-manager-cainjector/0.log" Jan 25 00:40:02 crc kubenswrapper[4947]: I0125 00:40:02.279387 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-wqxxr_d860ec8b-2f41-4b81-8868-9b078b55b341/cert-manager-webhook/0.log" Jan 25 00:40:08 crc kubenswrapper[4947]: I0125 00:40:08.089440 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:40:08 crc kubenswrapper[4947]: E0125 00:40:08.091516 4947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mdgrh_openshift-machine-config-operator(5f67ec28-baae-409e-a42d-03a486e7a26b)\"" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" Jan 25 00:40:16 crc kubenswrapper[4947]: I0125 00:40:16.673543 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-wjw4s_3e662e75-c8ba-4da8-856f-9fc73a2316aa/prometheus-operator/0.log" Jan 25 00:40:16 crc kubenswrapper[4947]: I0125 00:40:16.853445 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k_ae208ca2-2ac2-4a6a-b88e-127c986f32a5/prometheus-operator-admission-webhook/0.log" Jan 25 00:40:16 crc kubenswrapper[4947]: I0125 00:40:16.890610 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx_a3860bf6-f86b-4206-a225-6fa61372a988/prometheus-operator-admission-webhook/0.log" Jan 25 00:40:17 crc kubenswrapper[4947]: I0125 
00:40:17.049579 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-4v5sm_9d3adf01-5529-4edb-9b7f-f3c782156a8d/operator/0.log" Jan 25 00:40:17 crc kubenswrapper[4947]: I0125 00:40:17.081814 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-qz44g_38944919-0d65-4fdd-b2bd-2780f8e77bde/perses-operator/0.log" Jan 25 00:40:21 crc kubenswrapper[4947]: I0125 00:40:21.097743 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f" Jan 25 00:40:22 crc kubenswrapper[4947]: I0125 00:40:22.065868 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"7004e8129c6b88f58bbbee7984e53d0d93f2f96afa9dd97b9bbb53a49f0a5277"} Jan 25 00:40:22 crc kubenswrapper[4947]: I0125 00:40:22.082403 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8b4ht/must-gather-vgn5q" podStartSLOduration=83.566024468 podStartE2EDuration="1m30.082384649s" podCreationTimestamp="2026-01-25 00:38:52 +0000 UTC" firstStartedPulling="2026-01-25 00:38:53.592303303 +0000 UTC m=+1772.825293753" lastFinishedPulling="2026-01-25 00:39:00.108663494 +0000 UTC m=+1779.341653934" observedRunningTime="2026-01-25 00:39:01.351048262 +0000 UTC m=+1780.584038712" watchObservedRunningTime="2026-01-25 00:40:22.082384649 +0000 UTC m=+1861.315375089" Jan 25 00:40:33 crc kubenswrapper[4947]: I0125 00:40:33.076001 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf_373809d6-f72c-4eff-afeb-1fa942bb9e22/util/0.log" Jan 25 00:40:33 crc kubenswrapper[4947]: I0125 00:40:33.229410 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf_373809d6-f72c-4eff-afeb-1fa942bb9e22/util/0.log" Jan 25 00:40:33 crc kubenswrapper[4947]: I0125 00:40:33.254949 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf_373809d6-f72c-4eff-afeb-1fa942bb9e22/pull/0.log" Jan 25 00:40:33 crc kubenswrapper[4947]: I0125 00:40:33.299328 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf_373809d6-f72c-4eff-afeb-1fa942bb9e22/pull/0.log" Jan 25 00:40:33 crc kubenswrapper[4947]: I0125 00:40:33.580014 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf_373809d6-f72c-4eff-afeb-1fa942bb9e22/util/0.log" Jan 25 00:40:33 crc kubenswrapper[4947]: I0125 00:40:33.593038 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf_373809d6-f72c-4eff-afeb-1fa942bb9e22/extract/0.log" Jan 25 00:40:33 crc kubenswrapper[4947]: I0125 00:40:33.595754 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931ahnvvf_373809d6-f72c-4eff-afeb-1fa942bb9e22/pull/0.log" Jan 25 00:40:33 crc kubenswrapper[4947]: I0125 00:40:33.727906 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk_b18bd971-05aa-4366-8829-6d2db0f3a1a0/util/0.log" Jan 25 00:40:33 crc kubenswrapper[4947]: I0125 00:40:33.948982 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk_b18bd971-05aa-4366-8829-6d2db0f3a1a0/pull/0.log" Jan 25 
00:40:33 crc kubenswrapper[4947]: I0125 00:40:33.949892 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk_b18bd971-05aa-4366-8829-6d2db0f3a1a0/pull/0.log" Jan 25 00:40:33 crc kubenswrapper[4947]: I0125 00:40:33.950321 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk_b18bd971-05aa-4366-8829-6d2db0f3a1a0/util/0.log" Jan 25 00:40:34 crc kubenswrapper[4947]: I0125 00:40:34.151723 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk_b18bd971-05aa-4366-8829-6d2db0f3a1a0/extract/0.log" Jan 25 00:40:34 crc kubenswrapper[4947]: I0125 00:40:34.158819 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk_b18bd971-05aa-4366-8829-6d2db0f3a1a0/util/0.log" Jan 25 00:40:34 crc kubenswrapper[4947]: I0125 00:40:34.165968 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fqgfnk_b18bd971-05aa-4366-8829-6d2db0f3a1a0/pull/0.log" Jan 25 00:40:34 crc kubenswrapper[4947]: I0125 00:40:34.356910 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm_1cfc506e-cb97-4eb4-a967-d5ea940b5ce9/util/0.log" Jan 25 00:40:34 crc kubenswrapper[4947]: I0125 00:40:34.491897 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm_1cfc506e-cb97-4eb4-a967-d5ea940b5ce9/util/0.log" Jan 25 00:40:34 crc kubenswrapper[4947]: I0125 00:40:34.529923 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm_1cfc506e-cb97-4eb4-a967-d5ea940b5ce9/pull/0.log" Jan 25 00:40:34 crc kubenswrapper[4947]: I0125 00:40:34.532151 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm_1cfc506e-cb97-4eb4-a967-d5ea940b5ce9/pull/0.log" Jan 25 00:40:34 crc kubenswrapper[4947]: I0125 00:40:34.689659 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm_1cfc506e-cb97-4eb4-a967-d5ea940b5ce9/util/0.log" Jan 25 00:40:34 crc kubenswrapper[4947]: I0125 00:40:34.731029 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm_1cfc506e-cb97-4eb4-a967-d5ea940b5ce9/extract/0.log" Jan 25 00:40:34 crc kubenswrapper[4947]: I0125 00:40:34.751825 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ezqxtm_1cfc506e-cb97-4eb4-a967-d5ea940b5ce9/pull/0.log" Jan 25 00:40:34 crc kubenswrapper[4947]: I0125 00:40:34.863080 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw_e1924f8a-318d-4d3b-ada5-703cf399beed/util/0.log" Jan 25 00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.071824 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw_e1924f8a-318d-4d3b-ada5-703cf399beed/util/0.log" Jan 25 00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.102028 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw_e1924f8a-318d-4d3b-ada5-703cf399beed/pull/0.log" Jan 25 
00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.120949 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw_e1924f8a-318d-4d3b-ada5-703cf399beed/pull/0.log" Jan 25 00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.362495 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw_e1924f8a-318d-4d3b-ada5-703cf399beed/util/0.log" Jan 25 00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.397841 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw_e1924f8a-318d-4d3b-ada5-703cf399beed/extract/0.log" Jan 25 00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.400902 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082xnlw_e1924f8a-318d-4d3b-ada5-703cf399beed/pull/0.log" Jan 25 00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.553104 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lkvvh_8f150ea3-0af6-4206-9d74-e15f901e571b/extract-utilities/0.log" Jan 25 00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.739460 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lkvvh_8f150ea3-0af6-4206-9d74-e15f901e571b/extract-utilities/0.log" Jan 25 00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.739570 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lkvvh_8f150ea3-0af6-4206-9d74-e15f901e571b/extract-content/0.log" Jan 25 00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.788013 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lkvvh_8f150ea3-0af6-4206-9d74-e15f901e571b/extract-content/0.log" Jan 25 
00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.946237 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lkvvh_8f150ea3-0af6-4206-9d74-e15f901e571b/extract-utilities/0.log" Jan 25 00:40:35 crc kubenswrapper[4947]: I0125 00:40:35.952713 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lkvvh_8f150ea3-0af6-4206-9d74-e15f901e571b/extract-content/0.log" Jan 25 00:40:36 crc kubenswrapper[4947]: I0125 00:40:36.155850 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g7982_5e39c693-6291-4810-863e-fd3e5cd45fbc/extract-utilities/0.log" Jan 25 00:40:36 crc kubenswrapper[4947]: I0125 00:40:36.244753 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-lkvvh_8f150ea3-0af6-4206-9d74-e15f901e571b/registry-server/0.log" Jan 25 00:40:36 crc kubenswrapper[4947]: I0125 00:40:36.394249 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g7982_5e39c693-6291-4810-863e-fd3e5cd45fbc/extract-content/0.log" Jan 25 00:40:36 crc kubenswrapper[4947]: I0125 00:40:36.397834 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g7982_5e39c693-6291-4810-863e-fd3e5cd45fbc/extract-utilities/0.log" Jan 25 00:40:36 crc kubenswrapper[4947]: I0125 00:40:36.434064 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g7982_5e39c693-6291-4810-863e-fd3e5cd45fbc/extract-content/0.log" Jan 25 00:40:36 crc kubenswrapper[4947]: I0125 00:40:36.552064 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g7982_5e39c693-6291-4810-863e-fd3e5cd45fbc/extract-utilities/0.log" Jan 25 00:40:36 crc kubenswrapper[4947]: I0125 00:40:36.683492 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-g7982_5e39c693-6291-4810-863e-fd3e5cd45fbc/extract-content/0.log" Jan 25 00:40:36 crc kubenswrapper[4947]: I0125 00:40:36.747091 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-g7982_5e39c693-6291-4810-863e-fd3e5cd45fbc/registry-server/0.log" Jan 25 00:40:36 crc kubenswrapper[4947]: I0125 00:40:36.763004 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-mbj6z_94a09856-1120-4003-a601-ee3c9121eb51/marketplace-operator/0.log" Jan 25 00:40:36 crc kubenswrapper[4947]: I0125 00:40:36.869580 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m2ddl_e8adfaf1-4e17-430c-970e-1cbf2e58c18a/extract-utilities/0.log" Jan 25 00:40:36 crc kubenswrapper[4947]: I0125 00:40:36.981036 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m2ddl_e8adfaf1-4e17-430c-970e-1cbf2e58c18a/extract-utilities/0.log" Jan 25 00:40:37 crc kubenswrapper[4947]: I0125 00:40:37.024339 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m2ddl_e8adfaf1-4e17-430c-970e-1cbf2e58c18a/extract-content/0.log" Jan 25 00:40:37 crc kubenswrapper[4947]: I0125 00:40:37.046938 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m2ddl_e8adfaf1-4e17-430c-970e-1cbf2e58c18a/extract-content/0.log" Jan 25 00:40:37 crc kubenswrapper[4947]: I0125 00:40:37.217426 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m2ddl_e8adfaf1-4e17-430c-970e-1cbf2e58c18a/extract-content/0.log" Jan 25 00:40:37 crc kubenswrapper[4947]: I0125 00:40:37.307353 4947 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-m2ddl_e8adfaf1-4e17-430c-970e-1cbf2e58c18a/extract-utilities/0.log" Jan 25 00:40:37 crc kubenswrapper[4947]: I0125 00:40:37.498607 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-m2ddl_e8adfaf1-4e17-430c-970e-1cbf2e58c18a/registry-server/0.log" Jan 25 00:40:50 crc kubenswrapper[4947]: I0125 00:40:50.977277 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7b894944f4-qvwgx_a3860bf6-f86b-4206-a225-6fa61372a988/prometheus-operator-admission-webhook/0.log" Jan 25 00:40:50 crc kubenswrapper[4947]: I0125 00:40:50.981795 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-wjw4s_3e662e75-c8ba-4da8-856f-9fc73a2316aa/prometheus-operator/0.log" Jan 25 00:40:51 crc kubenswrapper[4947]: I0125 00:40:51.001391 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7b894944f4-fvb2k_ae208ca2-2ac2-4a6a-b88e-127c986f32a5/prometheus-operator-admission-webhook/0.log" Jan 25 00:40:51 crc kubenswrapper[4947]: I0125 00:40:51.148315 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-4v5sm_9d3adf01-5529-4edb-9b7f-f3c782156a8d/operator/0.log" Jan 25 00:40:51 crc kubenswrapper[4947]: I0125 00:40:51.194534 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-qz44g_38944919-0d65-4fdd-b2bd-2780f8e77bde/perses-operator/0.log" Jan 25 00:41:40 crc kubenswrapper[4947]: I0125 00:41:40.740445 4947 generic.go:334] "Generic (PLEG): container finished" podID="1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" containerID="b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2" exitCode=0 Jan 25 00:41:40 crc kubenswrapper[4947]: I0125 00:41:40.740574 4947 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8b4ht/must-gather-vgn5q" event={"ID":"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8","Type":"ContainerDied","Data":"b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2"} Jan 25 00:41:40 crc kubenswrapper[4947]: I0125 00:41:40.741771 4947 scope.go:117] "RemoveContainer" containerID="b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2" Jan 25 00:41:41 crc kubenswrapper[4947]: I0125 00:41:41.517745 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8b4ht_must-gather-vgn5q_1ea54a3f-8bf0-481a-ae91-236c89f6e1f8/gather/0.log" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.292793 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8b4ht/must-gather-vgn5q"] Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.293821 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-8b4ht/must-gather-vgn5q" podUID="1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" containerName="copy" containerID="cri-o://447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0" gracePeriod=2 Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.301071 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8b4ht/must-gather-vgn5q"] Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.681015 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8b4ht_must-gather-vgn5q_1ea54a3f-8bf0-481a-ae91-236c89f6e1f8/copy/0.log" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.681954 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8b4ht/must-gather-vgn5q" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.721329 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwpmj\" (UniqueName: \"kubernetes.io/projected/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-kube-api-access-fwpmj\") pod \"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8\" (UID: \"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8\") " Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.721728 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-must-gather-output\") pod \"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8\" (UID: \"1ea54a3f-8bf0-481a-ae91-236c89f6e1f8\") " Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.727665 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-kube-api-access-fwpmj" (OuterVolumeSpecName: "kube-api-access-fwpmj") pod "1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" (UID: "1ea54a3f-8bf0-481a-ae91-236c89f6e1f8"). InnerVolumeSpecName "kube-api-access-fwpmj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.789077 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" (UID: "1ea54a3f-8bf0-481a-ae91-236c89f6e1f8"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.807046 4947 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8b4ht_must-gather-vgn5q_1ea54a3f-8bf0-481a-ae91-236c89f6e1f8/copy/0.log" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.807521 4947 generic.go:334] "Generic (PLEG): container finished" podID="1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" containerID="447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0" exitCode=143 Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.807573 4947 scope.go:117] "RemoveContainer" containerID="447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.807618 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8b4ht/must-gather-vgn5q" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.824709 4947 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.824742 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwpmj\" (UniqueName: \"kubernetes.io/projected/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8-kube-api-access-fwpmj\") on node \"crc\" DevicePath \"\"" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.830257 4947 scope.go:117] "RemoveContainer" containerID="b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.868827 4947 scope.go:117] "RemoveContainer" containerID="447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0" Jan 25 00:41:48 crc kubenswrapper[4947]: E0125 00:41:48.869270 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0\": container with ID starting with 447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0 not found: ID does not exist" containerID="447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.869307 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0"} err="failed to get container status \"447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0\": rpc error: code = NotFound desc = could not find container \"447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0\": container with ID starting with 447fa340031959fd477ba648029cbb51815705eb97cfea45ead46c65e9bcdac0 not found: ID does not exist" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.869329 4947 scope.go:117] "RemoveContainer" containerID="b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2" Jan 25 00:41:48 crc kubenswrapper[4947]: E0125 00:41:48.869715 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2\": container with ID starting with b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2 not found: ID does not exist" containerID="b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2" Jan 25 00:41:48 crc kubenswrapper[4947]: I0125 00:41:48.869741 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2"} err="failed to get container status \"b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2\": rpc error: code = NotFound desc = could not find container \"b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2\": 
container with ID starting with b1c63a19ac62e7c38877eab36ec2d473bacdfd664c8d116753afb4e920152ce2 not found: ID does not exist" Jan 25 00:41:49 crc kubenswrapper[4947]: I0125 00:41:49.097354 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" path="/var/lib/kubelet/pods/1ea54a3f-8bf0-481a-ae91-236c89f6e1f8/volumes" Jan 25 00:42:47 crc kubenswrapper[4947]: I0125 00:42:47.073623 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 00:42:47 crc kubenswrapper[4947]: I0125 00:42:47.074146 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.346020 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-594cn"] Jan 25 00:43:00 crc kubenswrapper[4947]: E0125 00:43:00.347190 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" containerName="copy" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.347219 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" containerName="copy" Jan 25 00:43:00 crc kubenswrapper[4947]: E0125 00:43:00.347271 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" containerName="gather" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.347287 4947 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" containerName="gather" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.347523 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" containerName="gather" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.347557 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ea54a3f-8bf0-481a-ae91-236c89f6e1f8" containerName="copy" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.349646 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.381364 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-594cn"] Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.412448 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-catalog-content\") pod \"redhat-operators-594cn\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.412571 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-utilities\") pod \"redhat-operators-594cn\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.412670 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffkv4\" (UniqueName: \"kubernetes.io/projected/79992216-2d93-4b69-99c7-1c9ae5f449ec-kube-api-access-ffkv4\") pod \"redhat-operators-594cn\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " 
pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.514624 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-catalog-content\") pod \"redhat-operators-594cn\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.515112 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-utilities\") pod \"redhat-operators-594cn\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.515263 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffkv4\" (UniqueName: \"kubernetes.io/projected/79992216-2d93-4b69-99c7-1c9ae5f449ec-kube-api-access-ffkv4\") pod \"redhat-operators-594cn\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.515401 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-catalog-content\") pod \"redhat-operators-594cn\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.515917 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-utilities\") pod \"redhat-operators-594cn\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:00 crc 
kubenswrapper[4947]: I0125 00:43:00.536687 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffkv4\" (UniqueName: \"kubernetes.io/projected/79992216-2d93-4b69-99c7-1c9ae5f449ec-kube-api-access-ffkv4\") pod \"redhat-operators-594cn\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.686293 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:00 crc kubenswrapper[4947]: I0125 00:43:00.943094 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-594cn"] Jan 25 00:43:01 crc kubenswrapper[4947]: I0125 00:43:01.512610 4947 generic.go:334] "Generic (PLEG): container finished" podID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerID="2c0324f54649ae756699bc76357a9068099e1bf6c32259095868b87bd7de27ba" exitCode=0 Jan 25 00:43:01 crc kubenswrapper[4947]: I0125 00:43:01.512655 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-594cn" event={"ID":"79992216-2d93-4b69-99c7-1c9ae5f449ec","Type":"ContainerDied","Data":"2c0324f54649ae756699bc76357a9068099e1bf6c32259095868b87bd7de27ba"} Jan 25 00:43:01 crc kubenswrapper[4947]: I0125 00:43:01.512682 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-594cn" event={"ID":"79992216-2d93-4b69-99c7-1c9ae5f449ec","Type":"ContainerStarted","Data":"7919acf83a48215938a002c1ea9840e60c59f2adaa6efa5c698adc20f460037f"} Jan 25 00:43:02 crc kubenswrapper[4947]: I0125 00:43:02.521248 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-594cn" event={"ID":"79992216-2d93-4b69-99c7-1c9ae5f449ec","Type":"ContainerStarted","Data":"fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f"} Jan 25 00:43:03 crc kubenswrapper[4947]: I0125 
00:43:03.534046 4947 generic.go:334] "Generic (PLEG): container finished" podID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerID="fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f" exitCode=0 Jan 25 00:43:03 crc kubenswrapper[4947]: I0125 00:43:03.534154 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-594cn" event={"ID":"79992216-2d93-4b69-99c7-1c9ae5f449ec","Type":"ContainerDied","Data":"fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f"} Jan 25 00:43:04 crc kubenswrapper[4947]: I0125 00:43:04.552709 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-594cn" event={"ID":"79992216-2d93-4b69-99c7-1c9ae5f449ec","Type":"ContainerStarted","Data":"9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc"} Jan 25 00:43:10 crc kubenswrapper[4947]: I0125 00:43:10.687086 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:10 crc kubenswrapper[4947]: I0125 00:43:10.687688 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:11 crc kubenswrapper[4947]: I0125 00:43:11.744732 4947 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-594cn" podUID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerName="registry-server" probeResult="failure" output=< Jan 25 00:43:11 crc kubenswrapper[4947]: timeout: failed to connect service ":50051" within 1s Jan 25 00:43:11 crc kubenswrapper[4947]: > Jan 25 00:43:17 crc kubenswrapper[4947]: I0125 00:43:17.072903 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 25 
00:43:17 crc kubenswrapper[4947]: I0125 00:43:17.073541 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 25 00:43:20 crc kubenswrapper[4947]: I0125 00:43:20.762913 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:20 crc kubenswrapper[4947]: I0125 00:43:20.795819 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-594cn" podStartSLOduration=18.300742681 podStartE2EDuration="20.795790005s" podCreationTimestamp="2026-01-25 00:43:00 +0000 UTC" firstStartedPulling="2026-01-25 00:43:01.514348162 +0000 UTC m=+2020.747338592" lastFinishedPulling="2026-01-25 00:43:04.009395436 +0000 UTC m=+2023.242385916" observedRunningTime="2026-01-25 00:43:04.582218234 +0000 UTC m=+2023.815208744" watchObservedRunningTime="2026-01-25 00:43:20.795790005 +0000 UTC m=+2040.028780485" Jan 25 00:43:20 crc kubenswrapper[4947]: I0125 00:43:20.841932 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:21 crc kubenswrapper[4947]: I0125 00:43:21.010706 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-594cn"] Jan 25 00:43:22 crc kubenswrapper[4947]: I0125 00:43:22.846322 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-594cn" podUID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerName="registry-server" containerID="cri-o://9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc" gracePeriod=2 Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.297740 
4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.461722 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-catalog-content\") pod \"79992216-2d93-4b69-99c7-1c9ae5f449ec\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.461936 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-utilities\") pod \"79992216-2d93-4b69-99c7-1c9ae5f449ec\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.463427 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-utilities" (OuterVolumeSpecName: "utilities") pod "79992216-2d93-4b69-99c7-1c9ae5f449ec" (UID: "79992216-2d93-4b69-99c7-1c9ae5f449ec"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.463789 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffkv4\" (UniqueName: \"kubernetes.io/projected/79992216-2d93-4b69-99c7-1c9ae5f449ec-kube-api-access-ffkv4\") pod \"79992216-2d93-4b69-99c7-1c9ae5f449ec\" (UID: \"79992216-2d93-4b69-99c7-1c9ae5f449ec\") " Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.464527 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-utilities\") on node \"crc\" DevicePath \"\"" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.473371 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79992216-2d93-4b69-99c7-1c9ae5f449ec-kube-api-access-ffkv4" (OuterVolumeSpecName: "kube-api-access-ffkv4") pod "79992216-2d93-4b69-99c7-1c9ae5f449ec" (UID: "79992216-2d93-4b69-99c7-1c9ae5f449ec"). InnerVolumeSpecName "kube-api-access-ffkv4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.565993 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffkv4\" (UniqueName: \"kubernetes.io/projected/79992216-2d93-4b69-99c7-1c9ae5f449ec-kube-api-access-ffkv4\") on node \"crc\" DevicePath \"\"" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.654003 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79992216-2d93-4b69-99c7-1c9ae5f449ec" (UID: "79992216-2d93-4b69-99c7-1c9ae5f449ec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.666863 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79992216-2d93-4b69-99c7-1c9ae5f449ec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.858599 4947 generic.go:334] "Generic (PLEG): container finished" podID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerID="9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc" exitCode=0 Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.858661 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-594cn" event={"ID":"79992216-2d93-4b69-99c7-1c9ae5f449ec","Type":"ContainerDied","Data":"9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc"} Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.858680 4947 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-594cn" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.858699 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-594cn" event={"ID":"79992216-2d93-4b69-99c7-1c9ae5f449ec","Type":"ContainerDied","Data":"7919acf83a48215938a002c1ea9840e60c59f2adaa6efa5c698adc20f460037f"} Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.858727 4947 scope.go:117] "RemoveContainer" containerID="9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.894539 4947 scope.go:117] "RemoveContainer" containerID="fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.895674 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-594cn"] Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.903904 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-594cn"] Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.925831 4947 scope.go:117] "RemoveContainer" containerID="2c0324f54649ae756699bc76357a9068099e1bf6c32259095868b87bd7de27ba" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.955427 4947 scope.go:117] "RemoveContainer" containerID="9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc" Jan 25 00:43:23 crc kubenswrapper[4947]: E0125 00:43:23.956019 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc\": container with ID starting with 9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc not found: ID does not exist" containerID="9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.956058 4947 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc"} err="failed to get container status \"9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc\": rpc error: code = NotFound desc = could not find container \"9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc\": container with ID starting with 9f4edd4a63622155fc48c592e0ae4ce3f9dabe2e2846d5d248c193db25081efc not found: ID does not exist" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.956085 4947 scope.go:117] "RemoveContainer" containerID="fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f" Jan 25 00:43:23 crc kubenswrapper[4947]: E0125 00:43:23.956433 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f\": container with ID starting with fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f not found: ID does not exist" containerID="fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.956474 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f"} err="failed to get container status \"fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f\": rpc error: code = NotFound desc = could not find container \"fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f\": container with ID starting with fbef36cf88ea2229f80dbf75741e05911afcc1a91b0c0fdd76fe3ab04136f94f not found: ID does not exist" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.956499 4947 scope.go:117] "RemoveContainer" containerID="2c0324f54649ae756699bc76357a9068099e1bf6c32259095868b87bd7de27ba" Jan 25 00:43:23 crc kubenswrapper[4947]: E0125 
00:43:23.956827 4947 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c0324f54649ae756699bc76357a9068099e1bf6c32259095868b87bd7de27ba\": container with ID starting with 2c0324f54649ae756699bc76357a9068099e1bf6c32259095868b87bd7de27ba not found: ID does not exist" containerID="2c0324f54649ae756699bc76357a9068099e1bf6c32259095868b87bd7de27ba" Jan 25 00:43:23 crc kubenswrapper[4947]: I0125 00:43:23.956862 4947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c0324f54649ae756699bc76357a9068099e1bf6c32259095868b87bd7de27ba"} err="failed to get container status \"2c0324f54649ae756699bc76357a9068099e1bf6c32259095868b87bd7de27ba\": rpc error: code = NotFound desc = could not find container \"2c0324f54649ae756699bc76357a9068099e1bf6c32259095868b87bd7de27ba\": container with ID starting with 2c0324f54649ae756699bc76357a9068099e1bf6c32259095868b87bd7de27ba not found: ID does not exist" Jan 25 00:43:25 crc kubenswrapper[4947]: I0125 00:43:25.108003 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79992216-2d93-4b69-99c7-1c9ae5f449ec" path="/var/lib/kubelet/pods/79992216-2d93-4b69-99c7-1c9ae5f449ec/volumes" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.692042 4947 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2n7hf"] Jan 25 00:43:32 crc kubenswrapper[4947]: E0125 00:43:32.693822 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerName="extract-content" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.693852 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerName="extract-content" Jan 25 00:43:32 crc kubenswrapper[4947]: E0125 00:43:32.693884 4947 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerName="extract-utilities" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.693897 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerName="extract-utilities" Jan 25 00:43:32 crc kubenswrapper[4947]: E0125 00:43:32.693932 4947 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerName="registry-server" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.693950 4947 state_mem.go:107] "Deleted CPUSet assignment" podUID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerName="registry-server" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.694231 4947 memory_manager.go:354] "RemoveStaleState removing state" podUID="79992216-2d93-4b69-99c7-1c9ae5f449ec" containerName="registry-server" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.695838 4947 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.697096 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2n7hf"] Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.812388 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-catalog-content\") pod \"community-operators-2n7hf\" (UID: \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") " pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.812442 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-utilities\") pod \"community-operators-2n7hf\" (UID: 
\"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") " pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.812503 4947 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th82f\" (UniqueName: \"kubernetes.io/projected/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-kube-api-access-th82f\") pod \"community-operators-2n7hf\" (UID: \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") " pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.913806 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th82f\" (UniqueName: \"kubernetes.io/projected/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-kube-api-access-th82f\") pod \"community-operators-2n7hf\" (UID: \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") " pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.914065 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-catalog-content\") pod \"community-operators-2n7hf\" (UID: \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") " pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.914185 4947 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-utilities\") pod \"community-operators-2n7hf\" (UID: \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") " pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.914634 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-utilities\") pod \"community-operators-2n7hf\" (UID: 
\"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") " pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.914762 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-catalog-content\") pod \"community-operators-2n7hf\" (UID: \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") " pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:32 crc kubenswrapper[4947]: I0125 00:43:32.932686 4947 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th82f\" (UniqueName: \"kubernetes.io/projected/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-kube-api-access-th82f\") pod \"community-operators-2n7hf\" (UID: \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") " pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:33 crc kubenswrapper[4947]: I0125 00:43:33.021788 4947 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:33 crc kubenswrapper[4947]: I0125 00:43:33.552802 4947 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2n7hf"] Jan 25 00:43:33 crc kubenswrapper[4947]: W0125 00:43:33.570212 4947 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cb7ba49_d823_4f53_b090_c6bcb63d57fc.slice/crio-c495e3d9497b30a3ea665c2ada21749bfeb4828baad2fe1d515c93807d553230 WatchSource:0}: Error finding container c495e3d9497b30a3ea665c2ada21749bfeb4828baad2fe1d515c93807d553230: Status 404 returned error can't find the container with id c495e3d9497b30a3ea665c2ada21749bfeb4828baad2fe1d515c93807d553230 Jan 25 00:43:33 crc kubenswrapper[4947]: I0125 00:43:33.962082 4947 generic.go:334] "Generic (PLEG): container finished" podID="2cb7ba49-d823-4f53-b090-c6bcb63d57fc" containerID="ce06e6e7a8b9450315d329689672bfe26eafdb550594e52df21e546cba8ec88e" exitCode=0 Jan 25 00:43:33 crc kubenswrapper[4947]: I0125 00:43:33.962165 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n7hf" event={"ID":"2cb7ba49-d823-4f53-b090-c6bcb63d57fc","Type":"ContainerDied","Data":"ce06e6e7a8b9450315d329689672bfe26eafdb550594e52df21e546cba8ec88e"} Jan 25 00:43:33 crc kubenswrapper[4947]: I0125 00:43:33.962538 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n7hf" event={"ID":"2cb7ba49-d823-4f53-b090-c6bcb63d57fc","Type":"ContainerStarted","Data":"c495e3d9497b30a3ea665c2ada21749bfeb4828baad2fe1d515c93807d553230"} Jan 25 00:43:34 crc kubenswrapper[4947]: I0125 00:43:34.972273 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n7hf" 
event={"ID":"2cb7ba49-d823-4f53-b090-c6bcb63d57fc","Type":"ContainerStarted","Data":"0898891484ecb7265ed7cfc5e280994b414f950f0ed19b2484c3bb95f332415a"} Jan 25 00:43:35 crc kubenswrapper[4947]: I0125 00:43:35.985309 4947 generic.go:334] "Generic (PLEG): container finished" podID="2cb7ba49-d823-4f53-b090-c6bcb63d57fc" containerID="0898891484ecb7265ed7cfc5e280994b414f950f0ed19b2484c3bb95f332415a" exitCode=0 Jan 25 00:43:35 crc kubenswrapper[4947]: I0125 00:43:35.985406 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n7hf" event={"ID":"2cb7ba49-d823-4f53-b090-c6bcb63d57fc","Type":"ContainerDied","Data":"0898891484ecb7265ed7cfc5e280994b414f950f0ed19b2484c3bb95f332415a"} Jan 25 00:43:36 crc kubenswrapper[4947]: I0125 00:43:36.994662 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n7hf" event={"ID":"2cb7ba49-d823-4f53-b090-c6bcb63d57fc","Type":"ContainerStarted","Data":"9f06d0ad16185fe626e595b85509c4907607d6f1f3f1aa98675a6d73f0f530b7"} Jan 25 00:43:37 crc kubenswrapper[4947]: I0125 00:43:37.033697 4947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2n7hf" podStartSLOduration=2.48280829 podStartE2EDuration="5.033672843s" podCreationTimestamp="2026-01-25 00:43:32 +0000 UTC" firstStartedPulling="2026-01-25 00:43:33.9637048 +0000 UTC m=+2053.196695240" lastFinishedPulling="2026-01-25 00:43:36.514569323 +0000 UTC m=+2055.747559793" observedRunningTime="2026-01-25 00:43:37.025492226 +0000 UTC m=+2056.258482686" watchObservedRunningTime="2026-01-25 00:43:37.033672843 +0000 UTC m=+2056.266663313" Jan 25 00:43:43 crc kubenswrapper[4947]: I0125 00:43:43.022505 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2n7hf" Jan 25 00:43:43 crc kubenswrapper[4947]: I0125 00:43:43.023182 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-2n7hf"
Jan 25 00:43:43 crc kubenswrapper[4947]: I0125 00:43:43.100706 4947 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2n7hf"
Jan 25 00:43:43 crc kubenswrapper[4947]: I0125 00:43:43.170877 4947 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2n7hf"
Jan 25 00:43:43 crc kubenswrapper[4947]: I0125 00:43:43.344860 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2n7hf"]
Jan 25 00:43:45 crc kubenswrapper[4947]: I0125 00:43:45.071727 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2n7hf" podUID="2cb7ba49-d823-4f53-b090-c6bcb63d57fc" containerName="registry-server" containerID="cri-o://9f06d0ad16185fe626e595b85509c4907607d6f1f3f1aa98675a6d73f0f530b7" gracePeriod=2
Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.079867 4947 generic.go:334] "Generic (PLEG): container finished" podID="2cb7ba49-d823-4f53-b090-c6bcb63d57fc" containerID="9f06d0ad16185fe626e595b85509c4907607d6f1f3f1aa98675a6d73f0f530b7" exitCode=0
Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.079953 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n7hf" event={"ID":"2cb7ba49-d823-4f53-b090-c6bcb63d57fc","Type":"ContainerDied","Data":"9f06d0ad16185fe626e595b85509c4907607d6f1f3f1aa98675a6d73f0f530b7"}
Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.080218 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2n7hf" event={"ID":"2cb7ba49-d823-4f53-b090-c6bcb63d57fc","Type":"ContainerDied","Data":"c495e3d9497b30a3ea665c2ada21749bfeb4828baad2fe1d515c93807d553230"}
Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.080235 4947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c495e3d9497b30a3ea665c2ada21749bfeb4828baad2fe1d515c93807d553230"
Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.082839 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2n7hf"
Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.128276 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-catalog-content\") pod \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\" (UID: \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") "
Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.128367 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-utilities\") pod \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\" (UID: \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") "
Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.128427 4947 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th82f\" (UniqueName: \"kubernetes.io/projected/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-kube-api-access-th82f\") pod \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\" (UID: \"2cb7ba49-d823-4f53-b090-c6bcb63d57fc\") "
Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.129348 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-utilities" (OuterVolumeSpecName: "utilities") pod "2cb7ba49-d823-4f53-b090-c6bcb63d57fc" (UID: "2cb7ba49-d823-4f53-b090-c6bcb63d57fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.133761 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-kube-api-access-th82f" (OuterVolumeSpecName: "kube-api-access-th82f") pod "2cb7ba49-d823-4f53-b090-c6bcb63d57fc" (UID: "2cb7ba49-d823-4f53-b090-c6bcb63d57fc"). InnerVolumeSpecName "kube-api-access-th82f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.208699 4947 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2cb7ba49-d823-4f53-b090-c6bcb63d57fc" (UID: "2cb7ba49-d823-4f53-b090-c6bcb63d57fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.229735 4947 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th82f\" (UniqueName: \"kubernetes.io/projected/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-kube-api-access-th82f\") on node \"crc\" DevicePath \"\""
Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.229809 4947 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 25 00:43:46 crc kubenswrapper[4947]: I0125 00:43:46.229835 4947 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cb7ba49-d823-4f53-b090-c6bcb63d57fc-utilities\") on node \"crc\" DevicePath \"\""
Jan 25 00:43:47 crc kubenswrapper[4947]: I0125 00:43:47.072808 4947 patch_prober.go:28] interesting pod/machine-config-daemon-mdgrh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 25 00:43:47 crc kubenswrapper[4947]: I0125 00:43:47.073392 4947 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 25 00:43:47 crc kubenswrapper[4947]: I0125 00:43:47.073487 4947 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh"
Jan 25 00:43:47 crc kubenswrapper[4947]: I0125 00:43:47.074691 4947 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7004e8129c6b88f58bbbee7984e53d0d93f2f96afa9dd97b9bbb53a49f0a5277"} pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 25 00:43:47 crc kubenswrapper[4947]: I0125 00:43:47.074834 4947 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" podUID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerName="machine-config-daemon" containerID="cri-o://7004e8129c6b88f58bbbee7984e53d0d93f2f96afa9dd97b9bbb53a49f0a5277" gracePeriod=600
Jan 25 00:43:47 crc kubenswrapper[4947]: I0125 00:43:47.086956 4947 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2n7hf"
Jan 25 00:43:47 crc kubenswrapper[4947]: I0125 00:43:47.134199 4947 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2n7hf"]
Jan 25 00:43:47 crc kubenswrapper[4947]: I0125 00:43:47.140964 4947 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2n7hf"]
Jan 25 00:43:48 crc kubenswrapper[4947]: I0125 00:43:48.106061 4947 generic.go:334] "Generic (PLEG): container finished" podID="5f67ec28-baae-409e-a42d-03a486e7a26b" containerID="7004e8129c6b88f58bbbee7984e53d0d93f2f96afa9dd97b9bbb53a49f0a5277" exitCode=0
Jan 25 00:43:48 crc kubenswrapper[4947]: I0125 00:43:48.106188 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerDied","Data":"7004e8129c6b88f58bbbee7984e53d0d93f2f96afa9dd97b9bbb53a49f0a5277"}
Jan 25 00:43:48 crc kubenswrapper[4947]: I0125 00:43:48.106572 4947 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mdgrh" event={"ID":"5f67ec28-baae-409e-a42d-03a486e7a26b","Type":"ContainerStarted","Data":"89af1b195462f08a0ce6aa251460370d11275e9bef877a52b4c4ddedf0d1bce6"}
Jan 25 00:43:48 crc kubenswrapper[4947]: I0125 00:43:48.106598 4947 scope.go:117] "RemoveContainer" containerID="71fbc8e0a3bdf08dc8f498fb65112d72875d2bbc91c89884e55fbc8dbb92266f"
Jan 25 00:43:49 crc kubenswrapper[4947]: I0125 00:43:49.104378 4947 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cb7ba49-d823-4f53-b090-c6bcb63d57fc" path="/var/lib/kubelet/pods/2cb7ba49-d823-4f53-b090-c6bcb63d57fc/volumes"